Patent abstract:
COMPUTER-IMPLEMENTED METHOD, COMPUTER SYSTEM, MOBILE MACHINE, AND COMPUTER-READABLE STORAGE MEDIA
Performance information indicative of the performance of the operator of a mobile machine is received. A performance opportunity space, indicative of possible performance improvement, is identified. The savings identified in the performance opportunity space are quantified.
Publication number: BR112016013737A2
Application number: R112016013737-0
Filing date: 2014-12-10
Publication date: 2021-05-18
Inventors: Dohn W. Pfeiffer; Sebastian Blank; John F. Reid; Timothy A. Deutsch; Alex D. Foessel
Applicant: Deere & Company
IPC main class:
Patent description:

[001] The present application is related to U.S. Patent Application Serial No. 14/546,725, co-pending, filed November 18, 2014 and entitled AGRONOMIC VARIATION AND TEAM PERFORMANCE ANALYSIS, the full description of which is incorporated by reference herein. The present application is related to U.S. Patent Application Serial No. 14/445,699, co-pending, filed July 29, 2014 and entitled OPERATOR PERFORMANCE OPPORTUNITY ANALYSIS, the full description of which is incorporated by reference herein. The present application is related to U.S. Patent Application Serial No. 14/271,077, co-pending, filed May 6, 2014 and entitled OPERATOR PERFORMANCE RECOMMENDATION GENERATION, the full description of which is incorporated by reference herein. The present application is related to U.S. Patent Application Serial No. 14/155,023, co-pending, filed January 14, 2014 and entitled OPERATOR PERFORMANCE REPORT GENERATION, the full description of which is incorporated by reference herein.
FIELD OF THE DESCRIPTION
[002] The present description relates to mobile equipment. More specifically, it relates to identifying performance opportunities for improving performance in the operation of mobile equipment.
BACKGROUND
[003] There is a wide variety of different types of equipment that are operated by an operator. Such equipment may include, for example, agricultural equipment, construction equipment, equipment
[004] There are currently some methods that allow operators or managers of agricultural equipment to obtain instrument panel information indicative of the operation of a piece of agricultural equipment. This information is merely informational in nature.
[005] The above discussion is merely provided for general background information and is not intended to be used as an aid in determining the scope of the claimed subject matter.
SUMMARY
[006] Performance information indicative of the performance of the operator of a mobile machine is received. A performance opportunity space, indicative of possible performance improvement, is identified. Identified savings in the performance opportunity space are quantified.
[007] This Summary is provided to introduce a selection of concepts in a simplified form, which are further described below in the detailed description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
[008] Figure 1 is an exemplary block diagram of an operator performance computing architecture.
[009] Figures 2A and 2B (collectively Figure 2) form a more detailed block diagram of the architecture shown in Figure 1.
[0010] Figure 3 is a flowchart illustrating one embodiment of the operation of the architecture shown in Figures 1 and 2 when computing performance data indicative of the performance of an operator.
[0011] Figure 4 shows one embodiment of a reference data store in greater detail.
[0012] Figure 4A is a flowchart illustrating an exemplary mode of operation of a recommendation engine.
[0013] Figures 5A-5G are still more detailed block diagrams of different channels for generating different performance pillar scores.
[0014] Figure 6A is a flowchart illustrating one way in which rules can be configured to generate recommendations.
[0015] Figures 6B-6E are graphs plotting the degree of compliance of a parameter corresponding to a rule against a parameter measurement.
[0016] Figure 6F is a flowchart illustrating one mode of operation of the recommendation engine in generating recommendations.
[0017] Figure 6G is an exemplary user interface view, which illustrates an exemplary operator performance reporting format.
[0018] Figures 6H-6T show additional examples of user interface displays.
[0019] Figure 7 is a block diagram of an example of a performance and financial analysis system.
[0020] Figure 7A shows an example of a graphical illustration of a continuum of performance and financial opportunity.
[0021] Figure 8 is a flowchart illustrating an example of the operation of the system shown in Figure 7.
[0022] Figure 9 is a flowchart illustrating, in more detail, an example of the operation of the performance and financial analysis system in Figure 7.
[0023] Figure 10 is a flowchart illustrating an example of the operation of the system shown in Figure 7 in identifying a performance opportunity space.
[0024] Figure 10A is an example of a user interface display.
[0025] Figure 10B is an example of a user interface display.
[0026] Figure 11 is a flowchart illustrating an example of the operation of the system shown in Figure 7 in identifying a financial opportunity space.
[0027] Figure 12 is a block diagram of an example of an agronomic variation architecture.
[0028] Figure 13 is a flowchart showing an example of the operation of the architecture shown in Figure 12.
[0029] Figure 14 is a block diagram of an example of a team analysis architecture.
[0030] Figures 15A and 15B (collectively referred to as Figure 15) show a flowchart of an example of the operation of the architecture shown in Figure 14.
[0031] Figure 16 is a block diagram showing one embodiment of the architecture shown in Figures 1, 2, 7, 12 and 14 arranged in
[0032] Figures 17-22 show various embodiments of mobile devices that can be used in the architectures shown in Figures 1, 2, 7, 12, 14 and 16.
[0033] Figure 23 is a block diagram of an illustrative computing environment, which can be used in Figures 1, 2, 7, 12, 14 and 16.
DETAILED DESCRIPTION
[0034] Figure 1 is a block diagram of one embodiment of a performance reporting architecture 100. Architecture 100 includes a mobile machine 102, a data evaluation layer 104, a pillar score generation layer 106, and a pillar score aggregation layer 108. Layer 108 generates operator performance reports 110 and may also generate real-time, closed-loop (or asynchronous) control data 112 that can be provided back to the mobile machine 102. Architecture 100 is also shown having access to a reference data store 114.
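The layered flow just described (raw sensing, derived data computation, evaluation against reference data, pillar scoring, and aggregation) can be sketched in simplified form. The following Python sketch is purely illustrative: all function names, the ratio-based evaluation, and the 0-100 scaling are assumptions made here, not the patent's actual implementation.

```python
# Illustrative sketch of the layered flow of architecture 100. All names,
# the ratio-based evaluation, and the 0-100 scaling are assumptions.

def derive_data(raw_samples):
    # Derived data computation layer 118: e.g., average raw sensor values.
    return {name: sum(vals) / len(vals) for name, vals in raw_samples.items()}

def evaluate(derived, reference):
    # Data evaluation layer 104: compare derived data with reference data.
    return {name: derived[name] / reference[name] for name in derived}

def run_pipeline(raw_samples, reference, weights):
    evaluations = evaluate(derive_data(raw_samples), reference)
    # Pillar score generation layer 106: scale each evaluation to 0-100.
    pillar_scores = {n: min(100.0, 100.0 * v) for n, v in evaluations.items()}
    # Pillar score aggregation layer 108: weighted composite score.
    composite = sum(weights[n] * s for n, s in pillar_scores.items())
    return pillar_scores, composite

scores, composite = run_pipeline(
    {"productivity": [8.0, 10.0, 12.0]},   # raw sensor samples
    {"productivity": 10.0},                # reference value
    {"productivity": 1.0})                 # pillar weight
print(scores, composite)
```

The point of the sketch is only the direction of data flow: each layer consumes the previous layer's output and refines it toward a single composite number.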
[0035] In the embodiment shown in Figure 1, the mobile machine 102 is described as being an agricultural machine (and specifically a harvester), but this is only an example. It could be another type of mobile agricultural machine as well, such as a tractor, a seeder, a cotton harvester, a sugar cane harvester, or others. It could also be a machine used in the turf and forestry industries, the construction industry, and so on. Mobile machine 102 illustratively includes a raw data sensing layer 116 and a derived data computation layer 118.
[0036] The raw data sensing layer 116 illustratively includes a plurality of different sensors (some of which are
[0037] Derived data 120 is provided to the data evaluation layer 104. In one embodiment, the data evaluation layer 104 compares the derived data 120 with reference data stored in the reference data store 114. The reference data may be historical data for operator 101, or data from a variety of other sources, such as data collected for operators in the fleet of a single farm employing operator 101, or relevant data obtained from other operators as well. The data evaluation layer 104 generates evaluation values 122 based on an evaluation of how the derived data 120 for operator 101 compares with the reference data in the reference data store 114.
[0038] The evaluation values 122 are provided to the pillar score generation layer 106. Layer 106 illustratively includes a set of score calculators, which calculate a performance score 124 for each of several different performance pillars (or score categories) that can be used to characterize the performance of operator 101 when operating the agricultural machine 102. The particular performance pillars and associated scores 124 are described in more detail below.
[0039] Each of the pillar scores 124 is provided to the pillar score aggregation layer 108.
[0040] In one embodiment, layer 108 also generates the real-time (or asynchronous) closed-loop control data 112 that can be fed back to the agricultural machine 102. Where the data is fed back in real time, it can be used to adjust the operation, settings, or other control parameters of the agricultural machine 102 in use, in order to improve overall performance. The data can also be used to display information to operator 101 indicating the operator's performance scores, along with recommendations for how operator 101 should change settings, control parameters, or other operator inputs in order to improve performance. The data may also be provided asynchronously, in which case it may be transferred to the agricultural machine 102 intermittently, or at preset times, in order to modify the operation of the agricultural machine 102.
[0041] Therefore, as described in more detail below, there can be, for example, three different user experiences for the information generated here, each with its own set of user interface displays and corresponding functionality. The first can be a real-time or near real-time user experience that displays individual operator performance information to the operator (such as in a built-in application running on a device in an operator compartment of the mobile machine 102). This can show, among other things, a comparison of the operator's performance scores against scores for a reference group. The reference group can comprise previous scores for the operator himself or herself, scores for other operators in the fleet, or scores for other operators in other fleets in a similar crop or geographic region, or both. It can show real-time data, recommendations, alerts, etc. These are just examples.
[0042] A second user experience may include displaying information to a remote farm manager. This can be done in near real time and on demand. It can summarize the performance of the fleet, and it can also display performance compared to other reference groups, or in other ways. This could be in an application built into the farm manager's machine, or elsewhere.
[0043] A third user experience might include displaying the information as a fleet scorecard at the end of the season. This experience can show information on fleet performance and financial impact. It can show summaries, analysis results, comparisons and projections. It can generate recommendations to form a plan for the next season that has a higher operational and financial trajectory, as examples.
[0044] Each of these user experiences can include a set of user interfaces. Those interfaces that have associated
[0045] Before describing the overall operation of architecture 100, a more detailed block diagram of one embodiment of the architecture will be described. Figures 2A and 2B are collectively referred to as Figure 2. Figure 2 shows a more detailed block diagram of one embodiment of architecture 100. Some of the items shown in Figure 2 are similar to those shown in Figure 1 and are similarly numbered.
[0046] Figure 2 specifically shows that the raw data sensing layer 116 in the agricultural machine 102 illustratively includes a plurality of machine sensors 130-132, along with a plurality of environment sensors 134-136. The raw data sensing layer 116 can also obtain raw data from other machine data sources 138. By way of example, sensors 130-132 can include a wide variety of different sensors that sense machine operating parameters and conditions in the agricultural machine 102. For example, these may include speed sensors, mass flow sensors that measure the mass flow of product through the machine, various pressure sensors, pump displacement sensors, tool sensors that sense various tool parameters, and fuel consumption sensors, among a wide variety of other sensors, some of which are described in more detail below.
[0047] Environment sensors 134-136 may also include a wide variety of different sensors that sense different things relating to the environment of the machine 102. For example, when the machine 102 is a type of harvesting machine (such as a combine), sensors 134-136 may include crop loss sensors that sense a
[0048] Other machine data sources 138 can include a wide variety of other sources. For example, these may include systems that provide and record alerts or warning messages relating to machine 102. They may also include the count and category of each warning, diagnostic code, or alert message, and may also include a wide variety of other information.
[0049] The machine 102 also illustratively includes the processor 140 and a user interface display device 141. The display device 141 illustratively generates user interface displays (under the control of the processor 140 or another component) that allow the user 101 to perform certain operations with respect to the machine 102. For example, the user interface displays on the device 141 may include user input mechanisms that allow the user to enter authentication information, start the machine, configure certain operational parameters for the machine, or otherwise control the machine 102.
[0050] In many agricultural machines, data from sensors (such as from the raw data sensing layer 116) is illustratively communicated to other computational components within the machine 102, such as the computer processor 140. The processor 140 is illustratively a computer processor with associated memory and timing circuitry (not shown separately). It is illustratively a functional part of the machine 102 and is activated by, and facilitates the functionality of, other layers, sensors, components, or other items in the machine 102. In one embodiment, signals and messages from the various sensors in layer 116 are communicated over a controller area network (CAN) bus. Thus, data from the raw data sensing layer 116 is illustratively referred to as CAN data 142.
[0051] The CAN data 142 is illustratively provided to the derived data computation layer 118, in which a number of computations are performed on that data to obtain the derived data 120, which is derived from the sensor signals included in the CAN data 142. The derived data computation layer 118 illustratively includes derivation computation components 144, estimation components 146, and may include other computation components 148. Derivation computation components 144 illustratively calculate some of the derived data 120 based on the CAN data 142. Derivation computation components 144 can illustratively perform fairly straightforward computations, such as averaging, computing certain values as they occur over time, plotting those values on various graphs, calculating percentages, and so on.
[0052] In addition, derivation computation components 144 illustratively include windowing components that break the incoming sensor signal data into time windows or time frames that
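As a rough illustration of such a windowing component, the sketch below breaks timestamped sensor samples into fixed-length time frames and summarizes each frame. The frame length, the mean-value summary, and all names are assumptions for illustration only, not the patent's implementation.

```python
# Hypothetical windowing component: break timestamped sensor samples into
# fixed-length time frames and summarize each frame with its mean value.
def window_samples(samples, frame_seconds):
    """samples: list of (timestamp_seconds, value) pairs, sorted by time."""
    frames = {}
    for t, value in samples:
        frames.setdefault(int(t // frame_seconds), []).append(value)
    # One summary value (here the mean) per time frame.
    return {i: sum(vals) / len(vals) for i, vals in frames.items()}

# Four samples split into two 5-second frames:
print(window_samples([(0, 2.0), (1, 4.0), (5, 6.0), (6, 8.0)], 5))
```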
[0053] Regardless of the type of components 144, 146 and 148 in layer 118, it will be appreciated that layer 118 illustratively performs computations that require relatively light processing and memory overhead. Thus, in one embodiment, layer 118 is disposed on the machine 102 (such as on a device located in the cab or other operator compartment of the machine 102) or on a portable device or other mobile device that can be accessed on the machine 102 by the user.
[0054] In any case, the derived data 120 is obtained from layer 118 and provided to the data evaluation layer 104. Again, this
[0055] Layer 104 includes comparison components 150, classifier components 152, other components 154, and processor
[0056] The classified evaluation values 122 are then provided to the pillar score generation layer 106. In the embodiment shown in Figure 2, the pillar score generation layer 106 includes performance pillar score generators 160, supporting pillar score generators 162, and processor 163. Performance pillar score generators 160 illustratively include generators that generate pillar scores corresponding to performance pillars that best characterize operator 101's overall performance in various performance categories. In one embodiment, pillar scores are generated for productivity,
[0057] It can be seen that, in the present embodiment, the performance pillar score generators 160 include a productivity score generator 164, an energy utilization score generator 166, a fuel economy score generator 168, a material loss score generator 170 (e.g., grain loss), and a material quality score generator 172 (e.g., grain quality). The supporting pillar score generators 162 illustratively include the logistics score generator 174 and the uptime information generator 176.
[0058] As an example, the productivity score generator 164 can include logic to generate a score based on an evaluation of a productivity versus yield slope in the evaluation values 122.
[0059] The energy utilization score generator 166 illustratively considers information output by the fuzzy logic classifiers 152 in layer 104, which is indicative of an evaluation of the energy of the tool used by the machine 102, under the control of the user (or operator)
[0060] The fuel economy score generator 168 can be a logic component that considers various aspects related to fuel economy and issues a score based on those
[0061] The material quality score generator 172 illustratively considers evaluation values 122 produced by the fuzzy logic components 152 in the data evaluation layer 104 that are indicative of an evaluation of material other than grain that was harvested, of whether the harvested product (such as corn or wheat) is broken or cracked, and of whether the harvested product includes foreign material (such as cob or straw); it may also consider evaluation values 122 that relate to the size and quality of the residue expelled from the machine 102.
[0062] The logistics score generator 174 may include logic that evaluates the performance of the machine 102 during different operations. For example, it can evaluate machine performance (under the operation of user 101) during unloading, during harvesting, and during idling. It can include measures such as the distance the machine has traveled in the field and on the road, the individual percentage breakdown in terms of total time, field layout (passes versus headlands), and other information. This, however, is only an example.
[0063] The uptime information generator 176
[0064] All of the performance pillar scores and supporting pillar scores (indicated by 124 in Figure 2) are illustratively provided to the pillar score aggregation layer 108. Layer 108 illustratively includes an aggregator component 180, a composite score generator 182, a recommendation tool 184 (which accesses recommendation rules 185), a processor 186, and a report generator 188. Aggregator component 180 illustratively aggregates all of the pillar scores 124 using a weighting applied to each score. The weighting can be based on user preferences (such as when the user indicates that fuel economy is more important than productivity), the weights can be default weights, or they can be a combination of default weights and user preferences or other weights. Similarly, the weighting can vary based on a wide variety of other factors, such as crop type, crop conditions, machine configuration, or other things.
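The weighted aggregation described above might be sketched as follows. The specific weight values and the normalized weighted average are illustrative assumptions, not the patent's formula.

```python
# Sketch of weighted pillar-score aggregation: default weights may be
# overridden by user preferences (e.g., weighting fuel economy over
# productivity). All weight values are illustrative.
def aggregate_scores(pillar_scores, default_weights, user_weights=None):
    weights = dict(default_weights)
    if user_weights:
        weights.update(user_weights)          # preferences override defaults
    total = sum(weights[p] for p in pillar_scores)
    return sum(weights[p] * s for p, s in pillar_scores.items()) / total

scores = {"productivity": 80.0, "fuel_economy": 60.0}
defaults = {"productivity": 1.0, "fuel_economy": 1.0}
# A user who values fuel economy three times as much pulls the composite
# score toward the fuel economy pillar:
print(aggregate_scores(scores, defaults, user_weights={"fuel_economy": 3.0}))
```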
[0065] Once aggregator component 180 aggregates and weights the pillar scores 124, the composite score generator 182
[0066] Once the composite score and recommendations are generated, the report generator component 188 generates an operator performance report 110 indicative of the performance of operator 101. The report generator component 188 can access the composite score, the performance pillar scores, all underlying data, recommendations, location and mapping information, and other data. The operator performance report 110 may be generated periodically, at the request of a manager, at the request of operator 101 or another user, and may be generated daily, weekly, or in other ways. It can also be generated on demand while the operation is in progress. In one embodiment, the operator performance report 110 illustratively includes a composite score 190 generated by the composite score generator 182 and recommendations 192 generated by the recommendation tool 184. Layer 108 may also illustratively generate the control data 112 that is passed back to the machine 102 to adjust the control of the machine 102 to improve overall performance.
[0067] The operator performance report 110 can, in one embodiment, be loaded onto a device such that it can be viewed in real time by operator 101 in the operator compartment of the machine 102, or it can be viewed in real time by a manager of
[0068] Figure 3 is a flowchart illustrating one embodiment of the overall operation of the architecture shown in Figure 2 when generating an operator performance report 110. Figure 3 will now be described in conjunction with Figures 2 and 4. Then, Figures 5A-5G will be described to show a more detailed embodiment of portions of the architecture 100 used to generate performance pillar scores.
[0069] In one embodiment, the processor 140 first generates a startup display on the user interface display device 141 to allow the user 101 to start the machine 102. Displaying the startup display is indicated by block 200 in Figure 3. User 101 then enters identifying information (such as authentication information or other information). This is indicated by block 202. User 101 then begins to operate machine 102. This is indicated by block 204.
[0070] As the user 101 operates the machine, the sensors in the raw data sensing layer 116 sense the raw data and provide signals indicative of that data to the derived data computation layer 118. This is indicated by block 206 in the flowchart of Figure 3.
[0071] The derived data 120 is then generated by components 144, 146 and 148 in layer 118. The derived data is illustratively derived such that the data evaluation layer 104 can provide evaluation data used in generating the pillar scores. Deriving the data for each pillar is indicated by block 220 in Figure 3. This can include a wide variety of computations, such as filtering 222, plotting 224, windowing 226, estimating 228, and other computations 230.
[0072] The derived data 120 is then provided to the data evaluation layer 104, employing the comparison components 150 and fuzzy logic classifier components 152. Providing the data to the data evaluation layer 104 is indicated by block 232 in Figure 3. The data may be provided over a wireless network 234 or a wired network 236; it may be provided in real time, as indicated by block 238; it may be saved and provided later (such as asynchronously), as indicated by block 240; or it may be provided in other ways 242.
[0073] The data evaluation layer 104 then evaluates the derived data against the reference data to provide information for each pillar. This is indicated by block 244 in Figure 3. The data can be evaluated using comparison 246, classification 248, or other mechanisms 250.
[0074] In one embodiment, the comparison components 150 compare the derived data 120 for operator 101 with the data from
[0075] Also, in the embodiment shown in Figure 4, the reference data sets 156 illustratively include context data 260. The context data may define the context within which the reference data was collected, such as the particular machine, machine configuration, crop type, geographic location, climate, and plant status.
[0076] It will be appreciated that the reference data in the reference data store 114 can be captured and indexed in a wide variety of different ways. In one embodiment, the raw CAN data 142 can be stored along with the derived data 120, evaluation values 122, user preferences 158, pillar scores 124, context data, and recommendations. The data can be indexed by operator, by machine and machine head identifier, by farm, by field, by crop type, by machine state (i.e., the state the machine was in when the information was collected, e.g., idle, idle while unloading, waiting to unload, harvesting, harvesting while unloading, field transport, road transport, turning at the ends, etc.), by settings state (i.e., machine adjustment settings, including fractionation setting, adjustment of drop spreaders, etc.), and by configuration state (i.e., machine hardware configuration). The data can be indexed in other ways as well.
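One simple way to realize the kind of multi-key indexing described in this paragraph is to key stored records by a tuple of the index fields. The field and key names below are illustrative assumptions, not the patent's schema.

```python
# Sketch of a reference data store indexed by operator, machine, crop type,
# and machine state. Field and key names are illustrative only.
reference_store = {}

def index_record(record):
    # Build a composite key from the indexing fields described in the text.
    key = (record["operator"], record["machine"],
           record["crop"], record["machine_state"])
    reference_store.setdefault(key, []).append(record)

index_record({"operator": "op1", "machine": "combine1", "crop": "corn",
              "machine_state": "harvesting", "yield_tph": 9.5})
print(len(reference_store[("op1", "combine1", "corn", "harvesting")]))
```

A real store would likely also index by farm, field, settings state, and configuration state; the same composite-key pattern extends to those fields.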
[0077] Once the evaluation layer 104 has performed the comparison with the reference data and classified a measure of that comparison using fuzzy logic heuristics, the evaluation values 122 represent the classification results and are provided to the pillar score generation layer 106. This is indicated by block 270 in Figure 3. The pillar score generation layer 106 then generates a pillar score for each performance pillar (and the supporting logistics pillar), based on the plurality of evaluation values 122. This is indicated by block 272 in Figure 3.
[0078] The pillar scores can be generated by combining the evaluation values for each individual pillar, and weighting and scaling them.
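That combining, weighting, and scaling step might look like the following sketch; the weights and the 0-100 scale are assumptions for illustration only.

```python
# Sketch: combine several evaluation values for one pillar into a single
# pillar score by taking a weighted average and scaling it to 0-100.
# Weights and scale are illustrative assumptions.
def pillar_score(evaluation_values, weights, scale=100.0):
    combined = sum(w * v for w, v in zip(weights, evaluation_values))
    return scale * combined / sum(weights)

# Two evaluation values in [0, 1], the first weighted twice as heavily:
print(pillar_score([0.9, 0.6], [2.0, 1.0]))
```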
[0079] The pillar scores 124 are then provided to the pillar score aggregation layer 108. This is indicated by block 282 in Figure 3. The report generator component 188 then generates the operator performance reports 110 based on the pillar scores, composite scores, underlying data, user preferences, context data, recommendations, etc. Generating the operator performance report 110 and control data 112 is indicated by block 284. Doing this by aggregating the pillar scores is indicated by block 286, generating the composite score is indicated by block 288, generating actionable recommendations is indicated by block 290, and generating and feeding back the control data 112 is indicated by block 292.
[0080] Before discussing a more detailed implementation, the operation of the recommendation tool 184 in generating recommendations will be described. Figure 4A is a flowchart showing one embodiment of this.
[0081] Figure 4A is a flowchart illustrating one embodiment of the operation of the recommendation tool 184 in Figure 2. The recommendation tool 184 first receives the pillar scores 124 of
[0082] The recommendation tool 184 identifies symptoms that are triggered in the expert system logic, based on all of the information received. This is indicated by block 259 in Figure 4A.
[0083] The expert system logic then diagnoses various opportunities to improve performance, based on the triggered symptoms. The diagnosis will illustratively identify areas where recommendations can be useful in improving performance. This is indicated by block 261 in Figure 4A.
[0084] The recommendation tool 184 then accesses the logic-based expert system rules 185 to generate recommendations. This is indicated by block 263. Rules 185 illustratively operate to generate recommendations based on the diagnosis, context information, and any other desired information.
[0085] The recommendation tool 184 then outputs recommendations, as indicated by block 265. The recommendations may be output to farm managers or other persons, as indicated by block 267. They may be output on demand, as indicated by block 269. They may be output intermittently or on a periodic basis (e.g., daily, weekly, etc.), as indicated by block 271, or they may be output in other ways as well, as indicated by block 273.
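A minimal sketch of such rule-based recommendation logic pairs each symptom predicate with a recommendation text. The thresholds and recommendation wording below are invented for illustration and are not taken from rules 185.

```python
# Hypothetical expert-system style rules: each rule pairs a symptom
# predicate with a recommendation. Thresholds and texts are invented.
RULES = [
    (lambda d: d["grain_loss_score"] < 50,
     "Reduce ground speed to lower grain loss."),
    (lambda d: d["fuel_economy_score"] < 50,
     "Reduce engine idle time to improve fuel economy."),
]

def recommend(diagnosis):
    # Return the recommendation text of every triggered rule.
    return [text for triggered, text in RULES if triggered(diagnosis)]

print(recommend({"grain_loss_score": 40, "fuel_economy_score": 70}))
```

A production rule base would also weigh context information (crop, machine configuration, conditions) before firing a rule; the predicate-plus-text structure stays the same.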
[0086] Figures 5A-5G show a more detailed implementation of architecture 100, in which machine 102 is a combine harvester. Figures 5A-5G each show a processing channel in architecture 100 for
[0087] Figure 5A shows a processing channel in architecture 100 that can be used to generate the productivity pillar score. Some of the items shown in Figure 5A are similar to those shown in Figure 2, and are similarly numbered. In the embodiment shown in Figure 5A, the machine sensors 130-132 in the raw data sensing layer 116 illustratively include a vehicle speed sensor 300, a machine configuration identifier 302, and a crop sensor, such as a mass flow sensor 306 that measures the mass flow of product through the machine 102. The components in the derived data computation layer 118 illustratively include components for generating derived data, such as a throughput computation component 308 that calculates the throughput indicating the total grain productivity of the machine 102. This can be in tons per hour, tons per hectare, other units, or a combination of such metrics. The components also include a windowing component 314, which divides the data into time windows or time frames and provides them to the data evaluation layer 104.
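As a rough sketch of such a throughput computation, mass-flow readings can be integrated over time into total grain tonnage. The sampling interval and units here are assumptions for illustration, not the patent's computation.

```python
# Sketch of a throughput computation: integrate mass-flow readings
# (tons per hour) over fixed sampling intervals into total grain tonnage.
def total_grain_tons(mass_flow_tph, interval_seconds):
    """mass_flow_tph: one mass-flow reading (t/h) per sampling interval."""
    hours_per_sample = interval_seconds / 3600.0
    return sum(rate * hours_per_sample for rate in mass_flow_tph)

# Four readings of 90 t/h, one every 30 seconds:
print(total_grain_tons([90.0, 90.0, 90.0, 90.0], 30))
```

Dividing such a tonnage by the area covered (from the speed sensor and header width) would give the per-hectare variant the paragraph mentions.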
[0088] The data evaluation layer 104 illustratively includes a grain productivity fuzzy logic evaluation mechanism 317 that not only compares the output of the derived data computation layer 118 with the various reference data sets 156 in the reference data store 114, but also classifies a measure of that comparison. In one embodiment, the output of the data evaluation layer 104 is illustratively a dimensionless number in a predefined range that indicates whether the operator performed in a good, average, or poor range with respect to the reference data to which it was compared. Again, as mentioned above, the good, average or poor categories are only
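A toy version of such a fuzzy classification might map the ratio of an operator's value to the reference value onto "poor", "average", and "good" memberships. The triangular membership shapes and breakpoints below are assumptions for illustration only, not the mechanism 317.

```python
# Toy fuzzy-style classifier: triangular membership functions map a
# performance ratio (operator value / reference value) onto categories.
# Shapes and breakpoints are illustrative assumptions.
def memberships(ratio):
    def tri(x, a, b, c):
        # Triangular membership: 0 outside (a, c), peaking at 1 at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
    return {
        "poor": tri(ratio, -0.5, 0.0, 0.75),
        "average": tri(ratio, 0.5, 1.0, 1.5),
        "good": tri(ratio, 1.25, 2.0, 2.5),
    }

m = memberships(1.0)          # operator exactly matches the reference
print(max(m, key=m.get))      # dominant category
```

Overlapping membership functions are what make the classification "fuzzy": a ratio of 1.3 would carry nonzero membership in both "average" and "good" rather than a hard label.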
[0089] Figure 5A shows that the pillar score generation layer 106 illustratively includes a grain productivity metric generator comprising the productivity score generator 164. The productivity score generator 164 receives the dimensionless output of the data evaluation layer 104 and generates a performance score 124 based on that input. The productivity score is indicative of operator 101's productivity performance, based on current data. This information is provided to layer 108.
[0090] Figure 5B shows one embodiment of a processing channel in architecture 100 that can be used to generate the logistics supporting pillar score. Some of the items shown in Figure 5B are similar to those shown in Figure 2, and are similarly numbered. Figure 5B shows that layer 116 includes a time sensor 318 that simply measures how long the machine 102 remains running. It also includes a machine state sensor 320 that identifies when the machine 102 is in each of several different states. A vehicle speed sensor 300 is also shown, although it was already described with respect to Figure 5A. It can also be a separate vehicle speed sensor. The derived data computation layer 118 illustratively includes a machine state determination component 322. Based on the machine state data received from sensor 320, the machine state determination component 322 identifies the particular machine state that machine 102 resides in at any given time. The machine state can include idle, harvesting, and harvesting while unloading, among a wide variety of others.
[0091] The components in the derived data computation layer 118 also illustratively include a plurality of
[0092] The outputs of components 324 and 340 are provided to fuzzy logic components 344 and 350, which compare the data provided by components 324 and 340 with reference data for productive time and idle time and evaluate it against that reference data. Again, in one embodiment, the output of the fuzzy logic components is a dimensionless value within a predetermined range, which indicates whether the performance of operator 101 was good, average, or poor relative to the reference data. Layer 104 may include other components to generate other outputs, and may consider other information from layers 116 and 118 or from other sources.
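The time-per-state bookkeeping that feeds such productive-time and idle-time evaluations can be sketched as follows; the state names and interval format are illustrative assumptions.

```python
# Sketch: accumulate seconds spent in each machine state from a log of
# (state, seconds) intervals, then compute each state's share of total time.
def state_shares(intervals):
    totals = {}
    for state, seconds in intervals:
        totals[state] = totals.get(state, 0) + seconds
    grand_total = sum(totals.values())
    return {state: secs / grand_total for state, secs in totals.items()}

log = [("harvesting", 3000), ("idle", 600), ("harvesting", 400)]
print(state_shares(log))  # productive vs. idle share of total time
```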
[0093] [0093] The logistics metric generator 166 illustratively computes a logistics metric, in the embodiment shown in Figure 5B, based on all of the illustrated inputs. The logistics metric is a measure of the operator's logistical performance based on various comparisons with reference data sets, and it can be based on other things as well.
[0094] [0094] Figure 5C shows a block diagram of an implementation of a computing channel in architecture 100 to calculate the fuel economy performance pillar score. In the embodiment shown in Figure 5C, layer 116 illustratively includes a grain yield sensor 352 (or calculator) that senses (or calculates) grain yield for the combine (e.g., machine 102). This can be the same as component 308 in Figure 5A, or different. This can provide an output indicative of grain yield in a variety of different measures or units. This may also include a
[0095] [0095] Layer 118 includes a component 360 that calculates a harvest fuel efficiency ratio for harvesting states and a component 362 that calculates a non-productive fuel efficiency ratio for non-productive states.
[0096] [0096] Components 382 and 384 fractionate the data from components 360 and 362 into discrete time windows. Layer 104 includes average distance components 386 and 388, which receive inputs from reference functions 390 and 392 and output an indication of the distance of lines fitted to the data output by components 382 and 384 from reference functions 390 and 392.
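The windowing and distance evaluation just described can be sketched as follows. This is a hypothetical illustration: the function names, the 60-second window width, and the use of per-window means are assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the windowing (components 382/384) and the "average
# distance" evaluation (components 386/388): raw samples are fractionated into
# fixed-width time windows, a representative value is computed per window, and
# the windowed values are compared with a reference function.

def window_means(samples, window_s=60.0):
    """samples: list of (timestamp_s, value). Returns the mean value per window."""
    buckets = {}
    for t, v in samples:
        buckets.setdefault(int(t // window_s), []).append(v)
    return [sum(vs) / len(vs) for _, vs in sorted(buckets.items())]

def average_distance(window_values, reference_fn):
    """Mean absolute distance of windowed values from a reference function."""
    if not window_values:
        return 0.0
    total = sum(abs(v - reference_fn(i)) for i, v in enumerate(window_values))
    return total / len(window_values)
```

A small average distance from the reference function would indicate fuel-efficiency behavior close to the reference; a large one would indicate a deviation worth evaluating.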
[0097] [0097] Layer 104 illustratively includes a harvest fuel efficiency evaluator 420 and a non-productive fuel efficiency evaluator 422. Component 420 receives the output of component 386 (and possibly other information), compares it with benchmark data, evaluates the measure of that comparison, and outputs a value indicative of operator 101's performance in terms of harvest fuel efficiency. Component 422 does the same for non-productive fuel efficiency.
[0098] [0098] Layer 106 in Figure 5C illustratively includes a fuel economy metric generator as the fuel economy score generator 168 (shown in Figure 2). It receives inputs from components 420 and 422, can also receive other inputs, and generates a fuel economy pillar score for operator 101.
[0099] [0099] Figure 5D shows an embodiment of a computation channel in the architecture 100 shown in Figure 2 to calculate the material loss performance pillar score. It can be seen that the material loss score generator 170 (of Figure 2) comprises the grain loss metric generator 170 shown in Figure 5D. In the embodiment shown in Figure 5D, layer 116 includes a shoe loss sensor component 426 that senses shoe loss and calculates a percentage of total shoe loss. It also includes a separator loss sensor 436 that senses separator loss and computes a percentage of total separator loss, a chaff volume sensor 446 that senses chaff volume, and a mass flow sensor 448. The mass flow sensor 448 can be the same as sensor 306 in Figure 5A, or different.
[00100] [00100] Windowing components 451, 453 and 455 receive inputs from components 426, 436 and 448 and fractionate them into discrete time windows. These signals can be filtered and are provided to layer 104. The data evaluation layer 104 illustratively includes a total shoe loss evaluator 452, a total separator loss evaluator 456, and a chaff evaluator 460.
[00101] [00101] The total shoe loss evaluator 452 illustratively comprises a fuzzy logic component that receives the total shoe loss from windowing component 451 in layer 118 and compares it with total shoe loss reference data from the reference data store 114. It then evaluates the measure of that comparison to provide a dimensionless value indicative of whether operator 101's performance in terms of total shoe loss is good, average, or poor.
[00102] [00102] Similarly, the total separator loss evaluator 456 comprises a fuzzy logic component that receives the total separator loss from windowing component 453 and compares it with the reference data for total separator loss, and then evaluates the measure of that comparison to determine whether operator 101's performance in terms of total separator loss is good, average, or poor.
[00103] [00103] The chaff evaluator 460 is illustratively a fuzzy logic component that receives an input from component 455 indicative of chaff volume and, perhaps, yield. It then compares those items with chaff reference data in the reference data store 114 and classifies the measure of that comparison as good, average, or poor. Component 460 then outputs a dimensionless value indicative of whether operator 101's performance, in terms of chaff, is good, average, or poor.
[00104] [00104] It can be seen from Figure 5D that, in one embodiment, all evaluator components 452, 456, and 460 receive an input from crop type component 450. Crop type component 450 illustratively informs components 452, 456 and 460 of the type of crop currently being harvested. Evaluator components 452, 456, and 460 can then take this into account when making comparisons and classifications relative to the reference data.
[00105] [00105] The grain loss metric generator 170 receives inputs from the various evaluator components in layer 104, aggregates those values, and computes a performance pillar score for material loss. In doing so, the material loss score generator 170 illustratively considers user preferences 468 that are provided with respect to material loss. These can be provided in terms of a total percentage or otherwise, and illustratively indicate the importance the user places on the various aspects of this particular performance pillar.
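The preference-weighted aggregation described above can be sketched in a few lines. This is a hypothetical illustration: the aspect names, the weight format, and the use of a weighted average are assumptions; the patent does not specify a particular aggregation formula.

```python
# Hypothetical sketch of how a pillar score generator such as the grain loss
# metric generator 170 might aggregate evaluator outputs, weighted by user
# preferences 468: a weighted average of the dimensionless evaluator values.

def pillar_score(evaluator_outputs, preferences):
    """evaluator_outputs / preferences: dicts keyed by aspect name.
    Returns the preference-weighted average of the evaluator values."""
    total_weight = sum(preferences.get(k, 0.0) for k in evaluator_outputs)
    if total_weight == 0:
        raise ValueError("no non-zero preference weights")
    weighted = sum(v * preferences.get(k, 0.0)
                   for k, v in evaluator_outputs.items())
    return weighted / total_weight
```

A user who weights shoe loss twice as heavily as the other aspects would, for example, pull the pillar score toward the shoe loss evaluator's output.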
[00106] [00106] Figure 5E is a more detailed block diagram showing an embodiment of a computation channel in architecture 100 to obtain a performance pillar score for material quality. It can be seen that the material quality score generator 172 shown in Figure 2 comprises the grain/residue quality metric generator 172 shown in Figure 5E. Figure 5E shows that, in one embodiment, the raw data sensing layer 116 includes a sensor 470 that senses material types in the grain elevator. Sensor 470 illustratively senses the volume of material other than grain (such as straw and cobs). The damaged crop sensor 480 illustratively senses the percentage of material that is damaged (such as broken, crushed, or cracked).
[00107] [00107] The residue properties sensor 486 can sense various properties of the residue. The properties can be the same or different, depending on whether the combine is configured for chopping or heaping.
[00108] [00108] Figure 5E shows that the derived data computation layer 118 illustratively includes components 472, 482 and 488 that filter the signals from sensors 470, 480 and 486. These can fractionate the signals into time windows and calculate a representative value for each window, or otherwise.
[00109] [00109] In the embodiment shown in Figure 5E, the data evaluation layer 104 illustratively includes a material-other-than-grain evaluator 500, a crop damage evaluator 502, and a residue quality evaluator 506. It can be seen that components 500, 502 and 506 can all illustratively be informed by user preferences with respect to grain quality limits or by reference data 450 for
[00110] [00110] In any case, the material-other-than-grain evaluator 500 illustratively receives an input from component 472 in the derived data computation layer 118 and compares the filtered material-other-than-grain value, for light material, with corresponding reference data in the reference data store 114. It then classifies the result of that comparison as good, average, or poor. The class is thus indicative of whether operator 101's performance, in terms of material other than grain in the grain elevator, is good, average, or poor.
[00111] [00111] The crop damage evaluator 502 receives an input from component 482 in layer 118 that is indicative of the percentage of product in the grain elevator that is damaged. It compares that information with the corresponding reference data from the reference data store 114 and classifies the result of that comparison as good, average, or poor. It then provides a value indicative of whether operator 101's performance, in terms of damaged product in the grain elevator, is good, average, or poor.
[00112] [00112] The residue quality evaluator 506 receives inputs from component 488 in layers 116 and 118 and compares those inputs with corresponding reference data in the reference data store 114. It then classifies the result of that comparison as good, average, or poor. It thus provides an output indicative of whether operator 101's performance, in terms of residue quality, is good, average, or poor.
[00113] [00113] The grain/residue quality metric generator 172 receives inputs from the various components in layer 104 and uses them to calculate a grain/residue quality score for the material quality performance pillar. This score is indicative of operator 101's overall performance in operating machine 102 in terms of grain/residue quality. The score is illustratively provided to layer 108.
[00114] [00114] Figure 5F shows an embodiment of a processing channel in the architecture 100 shown in Figure 2 to calculate a tool energy utilization score for the energy utilization pillar in a combine. Thus, the energy utilization score generator 174 is shown in Figure 5F. In the embodiment shown in Figure 5F, the raw data sensing layer 116 illustratively includes a tool speed sensor 510 and a tool load sensor 514. The derived data computation layer 118 illustratively includes a tool utilization component 516 that receives inputs from sensors 510 and 514 and calculates tool utilization (such as power in kilowatts). Filtering component 518 filters the value from component 516. Windowing component 520 fractionates the output of filtering component 518 into discrete time windows.
[00115] [00115] The output of windowing component 520 is provided to layer 104, which includes the tool energy utilization evaluator 522. The tool energy utilization evaluator 522 is illustratively a fuzzy logic component that receives the output of windowing component 520 in layer 118 and compares it with tool energy utilization reference data 523 in the reference data store 114. It then classifies the result of that comparison as good, average, or poor. The output of component 522 is thus a dimensionless value that indicates whether operator 101's performance in terms of tool energy utilization is good, average, or poor.
[00116] [00116] Score generator 174 receives the output of evaluator 522 and calculates a performance pillar score for tool energy utilization. The output of component 174 is thus a performance pillar score indicative of whether operator 101's overall performance in operating machine 102, in terms of tool energy utilization, is good, average, or poor. The score is illustratively provided to layer 108.
[00117] [00117] Figure 5G is a more detailed block diagram showing an embodiment of the architecture 100 shown in Figure 2 when generating the uptime summary. In the embodiment shown in Figure 5G, layer 116 includes machine data sensor 116. Machine data sensor 116 illustratively senses the particular machine state that machine 102 is in and the amount of time it is in a given state. It can also sense other things.
[00118] [00118] Layer 118 illustratively includes a diagnostic trouble code (DTC) component 524 that generates various diagnostic trouble codes based on different occurrences sensed in machine 102. These are temporarily stored in temporary storage 525. DTC count component 526 calculates the number of DTC occurrences per category, and the number and frequency of occurrence of various alarms and warnings indicated by machine data 116. By way of example, component 526 can calculate the number of times the feeder house becomes plugged, or the number of other alarms or warnings that indicate machine 102 is experiencing an unusually high amount of wear. Alarms and warnings can be event-based, time-based (such as how many separator hours the machine has used), or based on other things.
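The per-category counting and frequency calculation described above can be sketched briefly. This is a hypothetical illustration: the event format, the category names, and the per-hour rate are assumptions for illustration only.

```python
# Hypothetical sketch of what a DTC count component such as 526 might do:
# tally diagnostic trouble codes per category and compute an occurrence
# rate per hour of machine operation.

from collections import Counter

def dtc_counts(events):
    """events: list of (timestamp_s, category). Returns per-category counts."""
    return Counter(cat for _, cat in events)

def dtc_rate_per_hour(events, category, duration_s):
    """Occurrences of one category per hour of machine operation."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    n = sum(1 for _, cat in events if cat == category)
    return n * 3600.0 / duration_s
```

A rate that is high relative to a reference group could then feed an evaluator such as the alert/warning evaluator described below, or a rule such as rule 6.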
[00119] [00119] Layer 104 includes alert/warning evaluator 528 that compares various machine 102 information with reference data to generate information indicative of operator performance. The information is provided to the summary generator 176.
[00120] [00120] The uptime summary generator 176 at layer 106 receives the outputs of component 528 and uses them to generate uptime summary information indicative of operator 101's performance, in operating machine 102, in terms of uptime. The uptime summary information can be provided to layer 108 or used by
[00121] [00121] It will be noted that the present discussion describes data evaluation using fuzzy logic. However, this is exemplary only, and a variety of other evaluation mechanisms can be used instead. For example, data can be evaluated using clustering and cluster analysis, neural networks, supervised or unsupervised learning techniques, support vector machines, Bayesian methods, decision trees, and hidden Markov models, among others. Additionally, Figures 6A-6F below describe how to set up and use a fuzzy logic evaluator to generate recommendations. This is just one example of how collected data can be evaluated to determine whether it satisfies any of a variety of actionable conditions for which a recommendation can be generated. The other evaluation techniques can be used to determine this as well.
[00122] [00122] Figure 6A is a flowchart illustrating an embodiment of how recommendation rules 185 can be configured so that they can be used by recommendation tool 184 when generating recommendations 192. These rules represent actionable conditions. The collected and sensed data are evaluated against those conditions to determine whether the conditions are met and, if so, to what degree. When some of the conditions are satisfied, corresponding recommendations can be issued. The overall operation of configuring the rules will first be described with respect to Figure 6A, and a number of examples will then be provided in order to reinforce understanding.
[00123] [00123] According to one embodiment, the rules to be used by the recommendation tool 184 are enumerated first. This is indicated by block 600 in Figure 6A. The rules can be a wide variety of different types of rules, and can vary in number from a few rules to tens, hundreds, or even thousands of rules.
[00124] [00124] Once the rules are enumerated, one of the rules is selected. This is indicated by block 602. For the selected rule, a number of symptoms to be considered for the rule is selected. The symptoms to be considered may be obtained from substantially any of the levels set forth in Figure 1, for which examples have been provided in Figures 5A-5G. Thus, they may include, for example, CAN data 142, derived data 120, evaluation values 122, pillar scores 124, composite scores 190, or a host of other data. Selecting the symptoms to be considered by the selected rule is indicated by block 604 in Figure 6A.
[00125] [00125] When selecting those symptoms, they can be obtained from different levels of aggregation, as indicated by block 606. They can be reflected by an absolute number 608 or by a comparison with the reference data 156. They can be compared with user preferences 158 or other information. This type of relative information is indicated by block 610 in Figure 6A. Of course, the symptoms can be other items as well, and this is indicated by block 612.
[00126] [00126] Next, for each symptom selected for the current rule, a fuzzy set can be defined to identify a degree of rule compliance, based on the various parameters. This is indicated by block 614.
[00127] [00127] A rule priority is then assigned to the selected rule. For example, some rules may be more important than others in different applications. Then, different rule priorities can be assigned to reflect the rule's importance in the given application. The rule priority can be an absolute number or it can be a category (such as high, medium, low, etc.). Assigning the rule priority is indicated by block 616 in Figure 6A.
[00128] [00128] Finally, one or more concrete recommendations are defined for the selected rule. These are the recommendations that will be issued to the user when the rule fires. This is indicated by block 618 in Figure 6A. The recommendations can take a wide variety of different forms. For example, they might be fixed recommendations (such as "drive 3 km per hour faster"). This is indicated by block 620. They can also be variable recommendations 622, which vary based on a wide variety of different things. They may vary based on the degree of fulfillment, based on a combination of items, or based on a specified function.
[00129] [00129] In one exemplary embodiment, the process set forth in Figure 6A is repeated for each enumerated rule. This is indicated by block 630 in Figure 6A. This completes rule configuration.
[00130] [00130] A number of examples will now be provided. The following six rules will be discussed for example purposes only. It will be noted that many additional or different rules could also be enumerated. Rule 1. Ground speed too slow for yield. Rule 2. Driving too slowly while unloading on the move. Rule 3. Driving slower due to material handling disturbance and/or threat of plugging. Rule 4. Reduced crop and cannot drive faster. Rule 5. Excessive downtime due to grain logistics. Rule 6. Frequent plugging of the feeder house.
[00131] [00131] The symptoms that affect each rule can be selected to focus on multiple pillars, or on multiple other sensed or derived inputs. As an example, rule 1 above focuses on the grain yield pillar. Rule 2 focuses on both the grain yield and logistics pillars. Thus, the focus of a given rule can be a single pillar, combinations of pillars, combinations of individual sensed or derived parameters, or a wide variety of other things.
[00132] [00132] Selecting a set of symptoms to be considered when determining whether a rule fires will now be described for rule 1. The symptoms may include, for example, a consideration of whether grain yield, as measured against a reference (such as a yield reference value for the same crop and under the same conditions), is below a threshold level. It can also be considered whether the available tool energy is fully utilized, and whether the machine is loss-limited (which can be indicated when the loss pillar score is high). Average harvesting speed can also be considered. For example, the recommendation tool 184 can consider whether the average speed is below a reasonable upper limit (such that the machine could actually go faster and still run with ride comfort, etc.).
[00133] [00133] For each of these symptoms, a fuzzy set can be defined that applies to the rule. In one embodiment, the fuzzy set is defined by a membership function, given by its endpoints, on a graph that plots the degree of compliance against a measure of the parameter (or symptom). Figure 6B, for example, shows a graph of the degree of compliance plotted against a grain yield pillar score, compared with a reference group. The percentage on the x-axis of the graph shown in Figure 6B thus indicates how the grain yield score compares with the reference group.
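A membership function of the kind plotted in Figure 6B can be sketched as a piecewise-linear function defined by a few endpoints. This is a hypothetical illustration: the specific breakpoints used in the example below are assumptions, not values taken from the figures.

```python
# Hypothetical sketch of a degree-of-compliance function of the kind plotted
# in Figure 6B: piecewise-linear between (parameter, degree) endpoints,
# clamped to the first/last degree outside the endpoint range.

def degree_of_compliance(x, points):
    """points: ascending list of (x, degree) endpoints."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= x <= x1:
            # linear interpolation between adjacent endpoints
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

For instance, a fuzzy set that is fully satisfied (degree 1.0) when the grain yield score is at 80% of the reference group or below, and not satisfied at all (degree 0.0) at 100% or above, would be defined by the endpoints (80, 1.0) and (100, 0.0).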
[00134] [00134] Figure 6C plots the degree of compliance as a function of
[00135] [00135] Having defined a fuzzy set for each parameter corresponding to rule 1, a priority is then assigned to rule 1. In one embodiment, the priority can be high, medium, or low, based on the importance of the rule in the given application. The rule priority can be set in other ways as well.
[00136] [00136] A concrete recommendation for the rule is defined below.
[00137] [00137] The same process is then performed with respect to rules 2-6 above. For example, for rule 2, a consideration might be whether the ratio of productivity (in tons per hour) while harvesting to productivity while harvesting and unloading is below average (relative to a reference group on the same crop, under the same conditions). Another consideration might be whether the vehicle speed (such as an absolute number in kilometers per hour) is in a given range (such as a range of 0.1-6 kilometers per hour), to ensure that the rule does not fire if the speed is already high. The degree-of-fulfillment functions are then defined for each parameter, the rule is assigned a priority, and the recommendation is defined. The recommendation for rule 2 might be, for example, "speed up by y", where y is fixed or any form of parameter-dependent or parameter-independent function, or where y is scaled based on the degree of rule fulfillment, etc.
[00138] [00138] For rule 3 above, some symptoms to consider may include whether the rate of change and/or deviation of the change in rotor drive pressure is above normal. This can provide content for a report regarding field conditions. The degree-of-fulfillment functions are defined, the rule is assigned a priority, and a recommendation is defined. For some rules (such as rule 3) there may be no defined recommendation. Such a rule can merely trigger an entry in a report to provide context. This may allow a farm manager or other person to interpret other results in the report appropriately. As an example, the manager may be able to tell that the operator was driving slower due to a disturbance in the material flow. This may be due to field conditions rather than the operator. That context information is then provided in the report when this rule fires, but no recommendation is issued.
[00139] [00139] For rule 4 above, the parameters that are considered
[00140] [00140] For rule 5, some of the parameters to consider might be whether, after a field is completed, the logistics score is below 90%. Another parameter can include whether, after a field is completed, the percentage of idle time with a full (or nearly full) grain tank is above normal by a threshold amount, relative to the reference value for the same crop and under the same conditions. The degree of compliance can be defined for the rule, and a priority can be assigned. The recommendation might be to investigate grain logistics.
[00141] [00141] For rule 6 above, some of the parameters to consider may be whether certain trouble codes have been generated that indicate the feeder house is plugged. This can be indicated, for example, by a count of the number of such feeder codes per unit of time. If this rate is above a predefined threshold, or is high relative to a reference group, this can cause the rule to fire. The degree of compliance can be set for the rule in other ways, and a priority is assigned to the rule. The recommendation might be to investigate the configuration and adjustments of the header, because something is wrong, which is
[00142] [00142] Figure 6F is a flowchart illustrating one mode of operation of the recommendation tool 184 in determining which rules fire and when to present recommendations. The recommendation tool 184 first receives all of the selected symptoms or parameters, for all of the various rules, so that they can be evaluated. This is indicated by block 632 in Figure 6F.
[00143] [00143] The recommendation tool 184 then determines whether it is time to check whether any of the rules fire. This is indicated by block 634. This can be done in a wide variety of different ways. For example, recommendation tool 184 can evaluate the rules periodically. Additionally, rule evaluation can be based on sensed conditions. For example, if one rule fires, then other related rules can be evaluated immediately. In addition, if certain parameters or values are sensed, derived, or otherwise obtained, this can cause a rule or a subset of rules to be evaluated more frequently. In any case, the recommendation tool 184 determines whether it is time to evaluate the rules.
[00144] [00144] The recommendation tool 184 then determines the degree of compliance for each of the rules it is evaluating. This is indicated by block 636. This can also be done in a wide variety of different ways. As an example, for rule 1, the degree of compliance for each parameter can be calculated. The total degree of compliance for the entire rule can then be generated from the degrees of compliance for the individual parameters. As an example, the degree of compliance for the total rule can be the same as the degree of compliance of the weakest parameter. In another embodiment, the degree of compliance of the total rule can be based on a combination of the degrees of compliance for each of the
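The "weakest parameter" combination mentioned above can be sketched as follows. This is a hypothetical illustration: the minimum combination and the firing threshold of 0.5 are assumptions used for the example; as the text notes, other combinations of the per-parameter degrees are possible.

```python
# Hypothetical sketch of combining per-parameter degrees of compliance into
# a total degree for a rule, using the "weakest parameter" (minimum)
# combination, and of deciding whether the rule fires.

def rule_degree(parameter_degrees):
    """parameter_degrees: per-symptom degrees in [0, 1].
    Returns the rule's total degree of compliance."""
    if not parameter_degrees:
        return 0.0
    return min(parameter_degrees)

def rule_fires(parameter_degrees, threshold=0.5):
    """A rule fires when its total degree reaches the threshold."""
    return rule_degree(parameter_degrees) >= threshold
```

With the minimum combination, a single weakly satisfied symptom keeps the whole rule from firing, which matches the intent that all of a rule's symptoms should be present together.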
[00145] [00145] Once the degree of compliance with the rules is identified, the recommendation tool 184 determines which specific recommendations to issue to the operator. This is indicated by block 638 in Figure 6F. Determining which specific recommendations to issue can also be based on a variety of different considerations.
[00146] [00146] For example, if a recommendation was only recently issued, the recommendation tool 184 can suppress that recommendation for a predetermined period of time. This can be done so that recommendation tool 184 does not repeatedly issue the same recommendations too often. This is indicated by block 640 in Figure 6F.
[00147] [00147] The determination that a recommendation should be issued may also be based on the degree of compliance of its rule. This is indicated by block 642. For example, if a given rule has a very high degree of compliance, its corresponding recommendation can be issued before the recommendation corresponding to a rule that has a relatively low degree of compliance.
[00148] [00148] Determining whether to issue a recommendation may also be based on the priority assigned to the corresponding rule. This is indicated by block 644. For example, if a plurality of recommendations is being issued for high-priority rules, then recommendations for medium- or low-priority rules can be held back until the high-priority rules no longer fire. This is just one example.
[00149] [00149] Determining which recommendations to provide can be based on combinations of the rule's priority, its degree of compliance, the time since the recommendation was last provided, or combinations of other things as well. This is indicated by block 646.
[00150] [00150] In addition, it should be noted that the recommendation tool 184 can be configured to provide only a target number of recommendations at any given time. The highest-priority recommendations can then be issued in descending order until the target number of recommendations is reached. This is indicated by block 648 in Figure 6F. Recommendation tool 184 can determine which recommendations to issue in other ways as well. This is indicated by the block
[00151] [00151] Additionally, in one modality, conflicting recommendations are identified and conflicts are resolved before the recommendations are issued. Conflicts can be resolved in a wide variety of different ways. For example, when recommendations are prioritized, the conflict can be resolved based on priority. Priority can be assigned informally, heuristically, based on weights or key information, or otherwise. Conflicts can also be resolved using a predetermined recommendation hierarchy that establishes a recommendation precedence. Conflicts can be resolved by accessing a set of conflict resolution rules. Rules can be static, context-dependent, or dynamic. Conflicts can be resolved in other ways too.
[00152] [00152] Once the recommendations that must be issued are identified, the recommendation tool 184 issues the identified recommendations. This is indicated by block 652 in Figure 6F.
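The issuance logic described in the preceding paragraphs can be sketched as a single selection function. This is a hypothetical illustration: the field names, the 600-second cooldown, the target of three recommendations, and the ordering by priority and then by degree of compliance are assumptions chosen to illustrate blocks 640-648.

```python
# Hypothetical sketch of recommendation selection: skip recently issued
# recommendations, rank the rest by rule priority and then by degree of
# compliance, and cap the result at a target number.

PRIORITY_ORDER = {"high": 0, "medium": 1, "low": 2}

def select_recommendations(fired_rules, now_s, cooldown_s=600.0, target=3):
    """fired_rules: list of dicts with keys 'recommendation', 'priority',
    'degree', and 'last_issued_s' (None if never issued).
    Returns the recommendations to issue, in order."""
    eligible = [
        r for r in fired_rules
        if r["last_issued_s"] is None or now_s - r["last_issued_s"] >= cooldown_s
    ]
    # High priority first; within a priority, higher degree of compliance first.
    eligible.sort(key=lambda r: (PRIORITY_ORDER[r["priority"]], -r["degree"]))
    return [r["recommendation"] for r in eligible[:target]]
```

A conflict-resolution pass of the kind described above could be inserted before the final truncation, for example by dropping the lower-priority member of each conflicting pair.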
[00153] [00153] It should also be noted that the parameters considered for each rule need not be generated from complex computation. Instead, they can be obtained from any of the levels of data aggregation in Figure 1. Thus, some can be defined in engineering units rather than other measures. As an example, the parameters considered for rule 1 can be grain mass flow in
[00154] [00154] Figure 6G shows one embodiment of an exemplary reporting format for an operator performance report 110. The reporting format shown in Figure 6G is only an example, and is indicated by number 530. Also, it will be noted that each of the sections in Figure 6G can be modified, by the user, an administrator, or other personnel, to show different information as desired.
[00155] [00155] Report format 530 may illustratively include a section 532 adapted by the user or the manufacturer. This may include a machine and operator identifier section 534 that identifies the particular operator 101 and the particular machine 102 that the operator is operating. It may also include a date range section 536 that shows the date range for the report, and a report frequency indicator 538 that indicates how often the report is generated. In the embodiment shown in Figure 6G, report format 530 reports information for only three of the five
[00156] [00156] Figure 6G shows that report format 530 includes an overview section 540. The overview section 540 illustratively includes a set of performance pillar score indicators 542, 544, and
[00157] [00157] In the embodiment shown in Figure 6G, the overview section 540 includes a set of hour indicators 560 and 562 that indicate the operating hours of the components considered of interest to the user. In one embodiment, for example, hour indicator 560 indicates the number of tool hours that operator 101 has used for the information in the current report. Other hour indicators can be used as well.
[00158] [00158] Figure 6G also shows that, in one embodiment, for each pillar score shown in overview section 540, a more detailed section is also provided. For example, Figure 6G includes a productivity detail section 564, a quality detail section 566, and a fuel economy detail section 568.
[00159] [00159] Productivity detail section 564 includes detailed information about the various items sensed or computed in generating the
[00160] [00160] In the embodiment shown in Figure 6G, the quality detail section 566 illustratively includes the more detailed information that was used to generate the quality performance pillar score. For example, this may include detailed information regarding total separator loss, shoe loss, grain quality, straw quality, and chaff volume. It may also illustratively include image sections that show photographic images taken by the operator or otherwise. For example, image section 570 shows images that were taken relating to separator and shoe loss. Image section 572 includes images that were taken and are relevant to grain quality.
[00161] [00161] In the embodiment shown in Figure 6G, the fuel economy detail section 568 includes the detailed information that was used in generating the fuel economy performance pillar score shown in the overview section 540. It may therefore include such things as total fuel consumption during harvesting, during transport through the field, during road travel, and non-productive fuel consumption. Of course, it can also include other information. It will be
[00162] [00162] In another embodiment, the performance results can also be plotted on a field map generated, for example, from a satellite image of the field. For example, a GPS sensor (or other position sensor) can sense the location of machine 102 as other sensors are sensing things and as data is being calculated and derived. Mapping components can correlate the sensed location with the sensed and calculated data. The data can then be plotted on a geographic representation of the field for which the data was gathered and collected. The plotted results can include each metric (the five pillar scores) and the composite score. The plot can show (at the same time or selectively) other information as well. This then shows how the operator performed at different locations in the field, for different data.
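The correlation of sensed positions with computed scores can be sketched as a simple spatial binning step. This is a hypothetical illustration: the use of a square grid, the 20-metre cell size, and the per-cell mean are assumptions; the patent does not prescribe a particular mapping method.

```python
# Hypothetical sketch of correlating sensed GPS positions with computed
# scores so results can be plotted on a field map: samples are binned into
# square grid cells and the mean score per cell is reported.

def grid_scores(samples, cell_m=20.0):
    """samples: list of (easting_m, northing_m, score).
    Returns {(col, row): mean score} for a square grid of cell_m metres."""
    cells = {}
    for x, y, s in samples:
        key = (int(x // cell_m), int(y // cell_m))
        cells.setdefault(key, []).append(s)
    return {k: sum(v) / len(v) for k, v in cells.items()}
```

Each cell's mean score could then be rendered as a color over the corresponding area of the satellite image, one layer per metric.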
[00163] Figures 6H-6T show a plurality of different examples of user interface displays that can be generated by report generator component 188. As discussed above, it will be appreciated that the user interface displays can be generated and provided as a user experience for an operator in an operator station of mobile machine 102. The operator can then use the information in the displays to change machine operation, to modify settings on the machine, or to perform other tasks. In addition, the operator can see, in near real time, how he or she is performing against reference groups. The reference groups can be historical data for the operator himself or herself, other operators in the fleet, other high performing operators using a similar machine in a similar geographic region, on a similar crop, or other reference groups.
[00164] Figure 6H shows an example of a display screen 701. Display screen 701 may include an introductory text portion 703 that contains introductory text. It may also include a plurality of preference configuration portions (or crop configuration portions) 705-717. Each portion 705-717 will illustratively have a title identifier that identifies a title, and one or more sets of configuration functionality (shown generally at 719). The configuration functionality allows the operator to change machine or operational settings for machine 102. The types of configuration functionality 719 may vary with each section 705-717 based on the particular configuration being made.
[00165] For example, the configuration functionality could be a value entry mechanism 721 that allows the user to enter a value. The functionality can be an option selection mechanism 723 that allows the user to identify a setting or a group of settings by choosing an option. The functionality may include an on/off mechanism 725 that allows the user to turn a feature on or off. The configuration functionality can also allow the user to adjust a value, illustratively indicated in a value display section 731, by actuating plus and minus actuators or by sliding a slider along a scale.
[00166] In the example shown in Figure 6H, the user can return to a previous screen (such as a login screen or other screen) by actuating a "return" actuator 737, and the user can advance to a next screen by actuating a "next" actuator 739. The particular screen that is displayed in response to the operator actuating "return" actuator 737 or "next" actuator 739 can be controlled by report generator component 188, based on the identity or role of the user. For example, if the user is logged in as an operator, then report generator component 188 can generate a set of operator UI displays. On the other hand, if the user is logged in using a different identity (such as a manager identity), then report generator component 188 can generate a set of manager user interface displays.
[00167] Figures 6I-6M show examples of user interface displays that can be generated for an operator. For example, if the user actuates "next" actuator 739 on UI display 701, report generator component 188 can generate an operator runtime UI display, such as display 741 shown in Figure 6I. In the example shown in Figure 6I, display 741 includes a heading section 699 and a total performance score display mechanism 743, along with a set of individual performance pillar score display mechanisms 745, 747, 749 and 751. In the specific example shown in Figure 6I (and this is only an example), field identifier section 699 identifies the field in which the operator is operating.
[00168] Display 741 shows that each display mechanism also includes a comparison. It displays an indicator that marks the individual operator score on one side of display meter section 753, and displays an indicator that marks the reference group score on the opposite side. For example, in Figure 6I, the individual operator score is marked by display element 757 (which, in the illustrated example, is a hatch mark on one side of display meter section 753, but could be another indicator) and the reference group score is marked by display element 759 (which, again, is shown as a hatch mark but could be another indicator). Thus, for the overall performance score and for each of the individual performance pillar scores, the operator can easily check not only his or her own score in real time, but also how that score compares to the selected reference group. It will be noted that, in one example, other comparison indicators can be used as well.
[00169] In one example, the user can quickly change the displayed reference group by selecting one of reference group selectors 769 and 771. When the user actuates reference group selector 769, reference group indicator 759 for each of the performance display mechanisms becomes the average for the current operator. For example, when the user actuates user input mechanism 769, fuel economy display mechanism 747 will display the user's current score (represented by display element 757) compared to the user's average fuel economy score (as indicated by display element 759). Similarly, when the user actuates mechanism 771, report generator component 188 switches the reference group such that it displays the operator score versus fleet average scores for other operators in the fleet. It will be noted, however, that fleet scores may be only for top performing operators, or for other groups within the fleet. These are only examples.
[00170] Figure 6I also shows that, where a performance pillar score is based on a plurality of different measured metrics, the values for those metrics (which are used to constitute the total performance pillar score) may also be displayed. For example, it can be seen in Figure 6I that grain yield display mechanism 745 indicates that the total grain yield pillar score is based on a plurality of underlying measured metrics.
[00171] Figure 6I also shows that, in one example, report generator component 188 may show additional information. For example, display meter section 753 and digital display readout section 755 can show the instantaneous value for a given metric, but the display mechanism can show an average over a recent time period as well. For example, fuel economy display mechanism 747 can display, in display meter section 753 and digital display readout section 755, an instantaneous fuel economy score in liters per ton of harvested product. However, it can also include an average or aggregated score display section 765 that displays the average (over some predetermined period of time) for the fuel economy score, or an aggregated total fuel economy score for the entire season, for this field, or for some other characteristic for this operator. The same is shown with respect to power utilization display mechanism 749. It can be seen that display meter section 753 and digital display readout section 755 can display an instantaneous value for power utilization. However, display section 767 may display an average power utilization over a predetermined period of time, for this field, for this season, etc.
[00172] Figure 6J shows another example of a UI display 741. A number of the items in Figure 6J are similar to those shown in Figure 6I, and they are similarly numbered.
[00173] Figure 6K shows another example of a field report user interface display 777 that can be generated for an operator. Display 777 illustratively displays information about operator performance in a given field. Again, the field is identified by field identifier 699. Also, the total performance score, as well as the performance pillar scores (shown in Figures 6I and 6J), are displayed in Figure 6K. Field report display 777 can be displayed after the operator is finished with the field, or while operating within the field. In the example shown in Figure 6K, the operator has finished harvesting the field, and therefore the information on field report display 777 shows the results for the entire field. Again, this includes the display mechanisms for the total performance score and for each of the individual performance pillar scores.
[00174] Display 777 also illustratively includes an alerts and notifications display section 779, as well as an uptime summary display section 781. Section 779 allows the user to view (and scroll through) a list of alerts and notifications that were generated during harvesting of the field. Section 779 includes a pillar identifier 783 that identifies the particular performance pillar with which the alert or notification was associated. It also includes a description section 785 that describes the alert or notification, and a date identifier 787 that indicates when the alert or notification was generated. A drill down mechanism 789 can be actuated by the user to drill down and view additional details about the alert or notification. When the user does this, report generator component 188 retrieves the previously recorded alert or notification details and displays them to the user.
[00175] Uptime summary display section 781 displays information regarding the supporting pillars. This includes time sections that display the tool time 791 and the separator time 793 that were used in harvesting the field. It also includes a logistics section 795 and a diagnostic trouble code (DTC) section 797. Logistics section 795 includes a drill down mechanism 799 that allows the user to view additional details about logistics information. DTC section 797 also includes a drill down mechanism 901 that allows the user to view additional information regarding diagnostic trouble codes that were generated during harvesting of the field.
[00176] Figure 6L shows an example of a UI display 903 that can be generated when the user drills down to view additional logistics information, such as by actuating drill down mechanism 799 shown in Figure 6K.
[00177] Figure 6M is an example of a UI display 915 that can be generated when the user actuates drill down mechanism 901 in Figure 6K. UI display 915 includes a diagnostic trouble code numeric identifier 917 for each DTC that was generated while the user was harvesting in the field. It may also include a DTC title 919 and a description 921, which serve to further identify the particular diagnostic trouble code. Again, the list of diagnostic trouble codes can be scrolled using any suitable user input mechanism, such as scroll bar 923.
[00178] Returning to the UI display shown in Figure 6H, it is now assumed that the user is logged in as a manager. If the user actuates "next" mechanism 739, the user can be navigated by report generator component 188 to a set of manager UI displays. Figure 6N shows an example of a manager dashboard display 925. Manager dashboard 925 includes a field section 927 and an operator section 929. Field section 927 includes a set of navigable links 931, each corresponding to a separate field that is being harvested, or that has been harvested, by the fleet.
[00179] Operator display section 929 includes a set of navigable links 939, each of which corresponds to a different operator. Each navigable link illustratively includes a time-based graph section 941 and a numeric indicator section 943. Time-based graph section 941 shows one or more performance pillar scores for the identified operator over a recent period of time. Numeric indicator section 943 shows a current value for that performance pillar score, for the identified operator. In one example, the manager can select which performance pillar score to display for each operator and for each field. In another example, the manager can select multiple different performance pillar scores to display for each operator and for each field on dashboard 925. When the manager actuates one of links 939, the manager is navigated to a more detailed display of information corresponding to the identified operator.
[00180] Figure 6N also shows that, in one example, dashboard display 925 includes an alerts section 945. Alerts section 945 lists alerts that were generated on the current day, as well as those generated in the recent past. Each alert can illustratively have a title that indicates the particular performance pillar it affects, as well as an identifier of the corresponding field or operator.
[00181] Figure 6O shows a manager field report UI display 955. Display 955 can be generated, for example, when the manager has actuated the field display element 931 corresponding to the "Back 40" field. When the manager does this, report generator component 188 illustratively generates a more detailed display showing information corresponding to that field, as indicated by display 955. It can be seen that some of the items in Figure 6O are similar to those shown in Figure 6K (which is shown to the operator, as opposed to the manager), and they are similarly numbered. However, display 955 also illustratively includes a map actuator 957 and a set of operator actuators 959. Each of operator actuators 959 identifies a particular operator and displays one or more performance pillar scores (or the overall score) for that operator, as well as an uptime score for that operator. Elements 959 are actuatable elements such that, when the manager actuates one of them, the manager is navigated to more detailed information corresponding to that operator's performance in the identified field. Also, the all operators display field 961 is associated with an actuatable link. When the manager actuates that link, the manager is navigated to a more detailed display showing more detailed information for all operators who operated in the identified field.
[00182] Display 955 also, in one example, includes a slide-out actuator 947. Slide-out actuator 947 can be actuated by the manager in order to view information for other fields and operators.
[00183] Figure 6P shows an example in which the manager has actuated actuator 947. It can be seen in Figure 6P that a panel 949 has now slid into the manager field report display (shown in Figure 6O). Panel 949 illustratively includes a field actuator 951 and an operator actuator 953. It can be seen that the manager has actuated field actuator 951. Report generator component 188 then generates a list of other fields, shown generally at 956. Each item in the list illustratively includes an identification section that identifies the field, an indicator indicating whether the field is currently active or was active at some earlier date, and an overall performance score for all operators who have worked in that field. When the manager actuates one of the list items in list 956, report generator component 188 navigates the manager to more detailed information corresponding to that field.
[00184] If, on the other hand, the manager actuates operator actuator 953, then a list of operators is displayed. The list of operators will include an identification portion identifying the operator, an indication of whether the operator is currently working, and an overall score associated with that operator. Again, if the manager actuates an operator list item, the manager is navigated to a more detailed display showing more detailed information for the corresponding operator.
[00185] As an example, Figure 6Q shows a display that can be generated when the manager has actuated the link corresponding to the all operators display field 961 shown in Figure 6O.
[00186] Returning again to the display shown in Figure 6O, the manager can actuate map actuator 957. When the manager does this, report generator component 188 illustratively generates a more detailed map display of the field. Figure 6R shows an example of this. It can be seen in Figure 6R that a geographic image of the "Back 40" field is generated and displayed generally at 967. Report generator component 188 correlates a given performance pillar metric to the geographic locations in the field displayed at 967, and displays indications that indicate the value of the performance pillar metric at each specific location. In the example shown in Figure 6R, the display includes a performance pillar metric selector section 969. This allows the manager to select one of the performance pillar metrics to overlay on the geographic representation of the field shown at 967. It can be seen that the manager has selected one of the metrics for display on the map.
[00187] Referring again to the display shown in Figure 6P, if the manager actuates operator actuator 953 and then selects an operator from the displayed list, report generator component 188 generates a display showing more detailed information corresponding to the selected operator. The same is true if the manager actuates one of operator display elements 959. Figure 6S shows an example of an interface display 975 that can be generated when the manager does this. Figure 6S shows some items that are similar to those shown to the manager in Figure 6O, and those items are similarly numbered. However, rather than being aggregated data for a given field (as is the case with the information shown in Figure 6O), the information shown in Figure 6S is information for a specific operator (Nick). Thus, each of display mechanisms 743-751 shows the performance of the selected operator (Nick), as opposed to aggregated data for the whole field.
[00188] Display 975 also includes a historical data actuator that, when actuated, navigates the manager to a historical view, such as UI display 979 shown in Figure 6T.
[00189] Figure 6T shows UI display 979. Display 979 illustratively includes a metric selector panel 981 that allows the manager to select one or more performance metrics, which are then plotted in a historical view graph 983. It can be seen in the example shown in Figure 6T that the manager has selected the grain yield, power utilization and uptime performance metrics for display in display portion 983. Those items are separately displayed, as indicated by visually distinguishable lines. Each line has an associated window displayed nearby (illustrated by the dashed area around each line) that indicates an acceptable window for the corresponding metric. This allows the manager to quickly see whether the particular metric has drifted outside the acceptable window.
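The acceptable-window check just described can be sketched minimally as follows. The function name and the example bounds are assumptions for illustration, not identifiers from the patent:

```python
def outside_window(series, window):
    """Return the time indices at which a performance metric drifts
    outside its acceptable window.

    series -- list of (time_index, value) pairs for one metric line
    window -- (low, high) bounds of the acceptable window for that metric
    """
    low, high = window
    return [t for t, v in series if v < low or v > high]
```

A display such as graph 983 could then highlight the returned indices so the manager can see at a glance where a metric left its window.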
[00190] In the example shown in Figure 6T, report generator component 188 also shows a summary dashboard 985. Summary dashboard 985 displays summary information for a selected time period. The summary information in the example shown in Figure 6T is shown for a selected day. For example, the manager can actuate a selector to choose a different day or a different time period.
[00191] In one example, when the manager actuates period selector 989, a dropdown menu is displayed from which the manager can illustratively select a week, two weeks, a month, or a variety of other time periods. Where a different type of period selection mechanism is displayed, the manager can select a different period of time in other ways as well. When this occurs, report generator component 188 displays time graph section 983 with information for the newly selected time period.
[00192] The user interface displays, with their user input mechanisms, act to surface relevant information to the various users of the information in real time and near real time. This greatly enhances the operation of the machine. Having access to the information, the various users can adjust machine operation, training or other parameters to obtain significant performance improvements. In addition, by surfacing relevant information more quickly, the displays improve the performance of the computing system itself, because they reduce the need for a user to query or otherwise navigate through the system to find relevant information. This reduces processing and utilization overhead.
[00193] Figure 7 shows that, in one example, the information used by performance reporting architecture 100 can also be provided to a performance and financial analysis system for further analysis. Figure 7 is a block diagram showing one example of a performance and financial analysis system 660. System 660 may have access to data in data store 662. Data store 662 may, for example, store operator performance reports 110 and some of the underlying data used by architecture 100 (for example, data sensed or otherwise collected by architecture 100, reference data, or any of a wide variety of other information used in architecture 100). This data is indicated by 664. Data store 662 may include other data 666 as well. Also, in the example shown in Figure 7, system 660 may have access to reference data store 114 and recommendation tool 184. Additionally, it will be noted that system 660 may access other content 668, which may include, as examples, fuel price information indicative of fuel prices, labor and machine cost data, mapping components that can assist in mapping sensed or calculated data to a given location in a field, and a wide variety of other information.
[00194] Figure 7 shows that, in one example, system 660 generates UI displays 670 with user input mechanisms 672 for interaction by a user 674. User 674 can interact with user input mechanisms 672 to control and manipulate system 660.
[00195] System 660, in one example, includes performance opportunity space tool 676 and financial opportunity space tool 678. It may also include processor 680, UI component 682, search tool 684, browser 686 and other items 688.
[00196] Performance opportunity space tool 676 may include benchmark calculator component 690, actual performance calculator component 692, opportunity space identifier component 694 and performance savings component 696, and may include other items as well.
[00197] Before describing the operation of system 660 in more detail, a brief overview will first be provided. Performance opportunity space tool 676, in one example, uses benchmark calculator component 690 to calculate a range of different benchmark performance values across a plurality of different performance categories. For example, it can calculate an optimal theoretical performance, across the categories, for each machine in the fleet being analyzed. This can be based on machine configuration, machine automation level, and some or all of the other information used by architecture 100, or on other information (such as information obtained from content 668 using search tool 684 or browser 686).
[00198] It can thus be seen that opportunities are calculated using relative data rather than absolute data. Relative data considers conditions, geography, crop type, etc., while absolute measures do not.
[00199] In one example, the same metrics are not used for every comparison; the metrics can vary with the category being analyzed.
[00200] Figure 7A graphically illustrates a number of the items mentioned above. Figure 7A includes a graph 708 that plots both actual and theoretical performance distributions over a continuous financial and performance opportunity space, indicated by the x-axis of graph 708. Graph 708 graphically illustrates a sustainable performance envelope 710 that characterizes sustainable performance for the operator population within the context of its crop, geography and other contextual information. For example, in certain geographies, using certain machines, with certain operators and under certain circumstances (such as weather, terrain, etc.), it may only be possible to sustain performance within a given range. This is indicated by envelope 710.
[00201] Distribution 712 shows the performance distribution of the operators in a given fleet, across selected performance categories, where the performance of those operators lagged behind that of the leading operators in the fleet (whose performance is represented by distribution 714).
[00202] Figure 8 is a flowchart illustrating one example of the operation of system 660 in more detail. Figure 8 will be described with reference to Figures 7 and 7A. System 660 first receives information from reporting architecture 100, and it may receive information from other sources as well. This is indicated by block 722 in Figure 8. As mentioned briefly above, this may include operator performance reports 110, other data used by architecture 100 (indicated by 664), data from reference data store 114 and other content 668.
[00203] Performance opportunity space tool 676 then identifies a performance opportunity space where performance improvement is possible. This is indicated by block 724 in Figure 8, and is described in more detail below with respect to Figures 9 and 10.
[00204] Financial opportunity space tool 678 then identifies a financial opportunity space where improvement is possible, based on the performance opportunity space. This is indicated by block 726 in Figure 8, and is described in more detail below with respect to Figures 9 and 11. Briefly, however, financial opportunity space tool 678 assigns financial values to the performance improvements that are identified in the performance opportunity space. It then provides a financial savings output that identifies potential financial savings that can be achieved by improving performance.
[00205] System 660 may also illustratively use recommendation tool 184 to generate recommendations for taking advantage of the identified performance and financial opportunities. This is indicated by block 728 in Figure 8.
[00206] System 660 then outputs the performance and financial opportunities, along with the recommendations. This is indicated by block 730. The output can take a wide variety of different forms. For example, these items might be output during an agricultural season and reflect year-to-date opportunities and potential savings. This is indicated by block 732. The output can be made at the end of an agricultural season and indicate final values 734. It can be provided with drill down functionality 736 such that user 674 can review more detailed information corresponding to, for example, individual operators, individual machines, certain times of the year, etc. It can also be provided in other ways 738.
[00207] Figure 9 is a flowchart illustrating, in more detail, one example of the operation of system 660 in identifying the performance and financial opportunity spaces. In the example shown in Figure 9, performance opportunity space tool 676 first receives a set of category metrics identifying the categories for which the performance and financial opportunity spaces are to be identified. This is indicated by block 740 in Figure 9. The category metrics can be received in a variety of different ways. For example, category identifiers can be predefined, identifying a set of predefined categories. The categories can also be user configurable, such that the user can define his or her own categories. Of course, the categories can be provided in other ways as well. Once the categories are identified, system 660 provides values indicative of the performance and financial opportunity spaces, according to those categories.
[00208] Returning again to Figure 9, once the categories are identified, performance opportunity space tool 676 receives the performance data for the fleet under analysis. This is indicated by block 742. Actual performance calculator component 692 then obtains actual performance values that quantify the actual performance in each of the categories. It can do this simply by accessing those values, if they have already been calculated, or it can calculate them, if they are derived values that are yet to be derived from the received data. Obtaining the actual performance values in each category is indicated by block 744 in Figure 9. These values identify how the various operators and machines in the fleet under analysis actually performed in terms of the specified categories.
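The access-or-calculate behavior just described can be sketched as a simple memoized lookup. This is an illustrative pattern only; the function and parameter names are assumptions, not identifiers from the patent:

```python
def actual_performance(category, cache, derive_fn, raw_data):
    """Return the actual performance value for one category.

    If the value was already calculated, it is simply accessed from the
    cache; otherwise it is derived from the received raw data and stored
    so that later accesses need not recompute it.
    """
    if category not in cache:
        cache[category] = derive_fn(category, raw_data)
    return cache[category]
```

The same pattern applies to the benchmark values discussed next, which can likewise be accessed where already calculated or derived on demand.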
[00209] Benchmark calculator component 690 then obtains benchmark performance values in each category. Again, it can simply access those values where they have already been calculated, or it can calculate them if they are yet to be derived. Obtaining the benchmark performance values in each category is indicated by block 746. This information represents a number of benchmarks against which the actual performance data can be compared to identify opportunity spaces. In the example discussed above with respect to Figure 7A, the benchmarks can include the theoretical optimal distributions 718 and 720, as well as the high end 716.
[00210] Opportunity space identifier component 694 then compares the actual performance values with the benchmark performance values to identify the performance opportunity space. This is indicated by block 748. For example, component 694 can compare the performance data for the worst operators in each category (represented by distribution 712 in Figure 7A) with the performance data for the leading operators in each category (represented by distribution 714). The difference between those two quantifies a performance opportunity where performance can be improved if the worst operators increase their performance to match that of the leading operators. This is, however, only one opportunity space. Component 694 can also compare the actual performance data for the fleet under analysis with the theoretical optimals represented by distributions 718 and 720, and with the high end 716. Component 694 can compare fleet specific data with data from other fleets, or across a plurality of different fleets growing the same crop or crops in the same geographic region. Component 694 can compare actual performance data against other benchmarks in order to identify other performance opportunity spaces as well.
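The comparison of actual values against benchmark values can be sketched as below. The category names and the numbers in the usage are illustrative placeholders; only the subtraction pattern reflects the comparison described above:

```python
def performance_opportunity(actual, benchmark):
    """Compare actual performance values with benchmark values, per category.

    Both arguments map a category name to a value where larger is better.
    Returns the per-category gap (benchmark minus actual), floored at zero;
    a positive gap marks a category where improvement is possible.
    """
    return {cat: max(benchmark[cat] - actual.get(cat, 0.0), 0.0)
            for cat in benchmark}
```

For example, the worst-operator averages could be passed as `actual` and the leading-operator averages as `benchmark`, yielding the lead-versus-worst opportunity space; passing a theoretical optimal as `benchmark` yields a different opportunity space from the same pattern.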
[00211] Once the performance opportunity spaces are identified, performance savings component 696 can calculate or access information to identify the savings (in terms of performance) that can be achieved by taking advantage of each of the identified performance opportunity spaces. This is indicated by block 750 in Figure 9.
[00212] Financial opportunity space tool 678 then uses financial value mapping component 700 to assign financial values to the various performance savings values generated at block 750. Component 702 identifies the financial opportunity space based on those values, and financial savings component 704 calculates the savings (in any desired currency) that can be achieved by taking advantage of the financial opportunities (which can themselves be obtained by taking advantage of the performance opportunities). Determining the financial opportunity space based on the performance opportunity space is indicated by block 752 in Figure 9.
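The assignment of financial values to performance savings can be sketched as follows; the unit prices in the usage are placeholder figures for illustration, not values from the patent:

```python
def financial_savings(performance_savings, unit_values):
    """Assign a financial value to each performance savings figure.

    performance_savings -- category -> savings in physical units
                           (e.g., hours saved, liters of fuel saved)
    unit_values         -- category -> currency value per unit
                           (e.g., machine cost per hour, fuel price per liter)
    Returns the per-category financial savings plus a total.
    """
    per_category = {cat: amount * unit_values[cat]
                    for cat, amount in performance_savings.items()}
    per_category['total'] = sum(per_category.values())
    return per_category
```

The unit values could come from the other content 668 mentioned above, such as fuel price information and labor and machine cost data.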
[00213] Figure 10 is a flowchart illustrating one example of the operation of performance opportunity space tool 676 in more detail. Figure 10A shows an example of a UI display that illustrates performance opportunity data in tabular form. It will be appreciated that Figure 10A shows only one example of a UI display, and a variety of others could be used as well. The information could be shown in graphical or other diagrammatic form, or in a wide variety of other ways. Figures 10 and 10A will now be described in conjunction with one another.
[00214] In the example described with respect to Figures 10 and 10A, the performance opportunities to be identified include the opportunities reflected as the difference between the leading performance operators and the worst performing operators in each category.
[00215] Actual performance calculator component 692 calculates the actual performance values that will be used to identify opportunities. For example, where leading operator performance will be used, component 692 calculates the leading operator performance values across the identified performance categories. This is indicated by block 754 in Figure 10. An example is shown in table 756 in Figure 10A. It can be seen from table 756 that the categories, arranged in sets, are identified in column 758. Each of those sets includes a plurality of different, individual categories identified in column 760. Each of the categories in column 760 can be represented by performance values in specific units, as indicated in column 762. The actual performance values are shown in the remainder of table 756. As an example, column 764 shows the performance values for the worst day of the season, across some of the categories. Column 766 shows the mean values of all the worst operators across the categories. Column 768 shows the average leading operator value across the categories.
[00216] In any case, block 754 indicates that actual performance calculator component 692 calculates actual performance values, across the different categories, for the leading operators in each category, as shown in column 768, or for other groups or individuals that will be used as a basis for comparison in identifying opportunities. Actual performance calculator component 692 can also calculate actual performance values, across the various performance categories, for yet other fleet specific groups or individuals that are to be used in identifying opportunities. In one example, actual performance data is also calculated for the worst operators. This is indicated by block 778, and is shown generally in column 776 of the table.
[00217] Benchmark component 690 then calculates a variety of different benchmarks against which the actual performance values can be compared to identify the performance opportunity space. One reference value is a theoretical optimal performance given the current machine configuration. This is indicated by block 780. An example of this is illustrated at column 772 in Figure 10A. Component 690 can also calculate the theoretical optimal performance corresponding to the machines in the fleet under analysis, assuming there are automation updates. This is indicated by block 782. This can also be used as a reference value. Component 690 can also calculate the ultimate theoretical optimal performance for the machines, assuming the machines are energy limited, have maximum technology upgrades and are producing adequate quality product. This is indicated by block 784. Of course, other reference data can likewise be calculated or obtained, such as data for lead operators in other fleets, on the same crop or crops and in a similar geographic region, or other data.
[00218] The opportunity space identifier component 694 then compares the actual performance data with the calculated benchmarks to identify a continuous performance opportunity space. This is indicated by block 786. For example, component 694 in the present description compares the leading operator in each category with the average of the worst operators to identify an opportunity space. This is indicated by block 788. Component 694 can also compare the average of all operators (or of the best or worst operators) with any of the theoretical optimals that have been calculated, or with cross-fleet data. This is indicated by block 790. Component 694 can identify the continuous opportunity space in other ways as well, and this is indicated by block 792.
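As a rough sketch, the comparison of actual values against benchmarks described above might look like the following. The function name and the benchmark values are illustrative assumptions.

```python
# Hypothetical sketch of opportunity-space identification (in the spirit
# of component 694): the gap between each benchmark and an actual value.
def opportunity_space(actual, benchmarks):
    """Return the positive gap between each benchmark and an actual value.

    actual: achieved performance (higher is better).
    benchmarks: dict of benchmark name -> benchmark value.
    """
    return {name: max(0.0, ref - actual) for name, ref in benchmarks.items()}

gaps = opportunity_space(
    actual=40.1,  # e.g., mean value of the worst operators in a category
    benchmarks={
        "lead_operator": 52.0,        # best operator in the fleet
        "theoretical_optimum": 60.0,  # optimum for the current configuration
    },
)
```

Each gap then becomes a candidate performance opportunity to be quantified as savings.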
[00219] The performance savings component 696 then calculates performance savings values corresponding to each identified opportunity.
[00220] Figure 10B shows an example of a user interface display 798 that illustrates this. It can be seen from Figure 10B that the performance opportunity in some categories is quantified in hours saved, and the performance opportunity in other categories is quantified in liters of fuel saved. For example, comparing the top performers with the worst performers in the grain productivity category, it can be seen that the fleet could have saved 37.3 hours if the performance of the worst operators had matched the performance of the top operators. If the entire fleet of operators had matched one of the calculated optimal benchmarks, the savings could have been 118.2 hours. Again, it will be noted that these values are, in one example, relative rather than absolute. This adjusts for factors beyond the manager's or operator's control (such as average field size, crop yields in the region, etc.).
[00221] Similarly, if the worst operators had matched the lead operators in terms of energy use, the fleet could have saved 13.6 hours. If the worst operators had matched the top operators in terms of idle time to unload, the fleet would have saved 11.5 hours, and if all operators had performed at the optimal level, the fleet could have saved 22.3 hours. In addition, if the worst operators had matched the lead operators in stationary unload time, the fleet would have saved 5.1 hours. If all operators had performed optimally in that category, the fleet would have saved 28.2 hours.
[00222] The same types of opportunities are identified with respect to fuel savings.
[00223] As mentioned above, a wide variety of other opportunities can also be identified, such as deviation of sensed grain damage from a quality objective (sensed on the machine or as measured at the elevator) and actual grain loss sensed by the machine and measured with reference to the operator's grain loss preference objective (if set by the operator or manager). These are just examples.
[00224] Financial opportunity space tool 678 assigns a financial value to each opportunity. Figure 11 is a flowchart illustrating an example of the operation of tool 678 in more detail. Tool 678 first receives the performance savings values in each category, which were calculated by the performance savings component 696. Receiving this information is indicated by block 900 in Figure 11. As an example, tool 678 will receive the calculated hours of savings for each opportunity shown in Figure 10B.
[00225] The financial value mapping component 700 then accesses a mapping between the performance savings values and financial values for each category. This is indicated by block 902. By way of example, financial value mapping component 700 illustratively identifies a financial value in terms of hourly currency values (such as dollars per hour). As an example, it may be that running a separator costs approximately $500.00 per hour (which can be calculated in any desired way, such as using machine value depreciation). These values are shown illustratively at 904 in Figure 10B. The financial value mapping component 700 also illustratively identifies a currency value to assign to each liter of fuel. In the example shown in Figure 10B, the financial value mapping component 700 assigns a value of $1.00 per liter of fuel.
[00226] Once financial values are assigned to each of the performance savings values in each category, financial opportunity space identifier 702 identifies the financial opportunity space by calculating a financial amount that could be saved by taking advantage of the performance opportunities. These amounts correspond to the various financial opportunities.
[00227] For example, referring again to Figure 10B, the financial opportunity space identifier component 702 indicates that if the worst operators had matched the top operators in the grain productivity category, then the fleet would have saved $18,650.00. This is arrived at by multiplying the 37.3 hour performance opportunity by $500.00 per hour. Component 702 calculates these financial opportunities for each category shown in Figure 10B.
[00228] Component 702 does the same for the fuel opportunity. It takes the one dollar designated per liter of fuel that could be saved, multiplies it by the number of liters that could be saved for each opportunity, and identifies the resulting savings amount as the corresponding financial opportunity.
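The multiplication described above can be sketched as follows. The rates mirror the example figures in Figure 10B ($500.00 per hour, $1.00 per liter); the function and dictionary names are assumptions for illustration.

```python
# Hypothetical sketch of the financial mapping (in the spirit of
# components 700 and 702): performance savings in hours or liters are
# multiplied by a currency rate per saved unit.
RATES = {"hours": 500.00, "liters": 1.00}  # currency value per saved unit

def financial_opportunity(savings, unit):
    """Convert a performance savings value into a financial opportunity."""
    return savings * RATES[unit]

# 37.3 hours saved in the grain productivity category -> $18,650.00.
grain_productivity_value = financial_opportunity(37.3, "hours")
```

Summing the per-category financial opportunities would then give the total savings values discussed below.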
[00229] The financial savings component 704 then calculates the ultimate savings that could be achieved by increasing performance across the various categories. It can be seen from Figure 10B that if the worst operators had improved their performance to match the top operators across all categories, the fleet would have saved $39,414. If all operators had been operating at an optimal level, in all categories, the fleet would have saved $105,197. This information is output for display, for further use or analysis by user 674. Calculating the financial savings values based on the performance savings values and outputting the financial savings values is indicated by blocks 906 and 908 in Figure 11.
[00230] In one example, the financial and performance opportunities can also be used to identify performance improvement items. For example, a training facility might have a catalog of training classes that map to identified performance opportunities. Tool 676 can access the mapping to identify the training classes that map most directly to the identified performance opportunities. As an example, a certain training class might map strongly to increasing an operator's performance in energy utilization. Another might map strongly to another performance pillar, such as grain yield. Based on the performance opportunities, tool 676 can identify matching classes and send them to recommendation tool 184, where they can be included in recommendations.
[00231] The performance and financial analysis capabilities not only greatly enhance the performance of an operator, farm manager or other consumer of the information, but also greatly enhance the performance of the farm machine or other mobile machine. The information can be used by an operator to make adjustments to the mobile machine's operation to improve performance, and other users of the information can make better decisions, more accurately and more quickly, regarding fleet operation. These features also improve the performance of the computing system on which they are deployed. By surfacing this information more quickly, the user does not need to load the system with additional navigation and search operations. This reduces the system's computing overhead and thus improves its performance.
[00232] According to another example, the information that is generated, as described above, can be used to identify agronomic variation and to perform various analyses to identify improvements that a given producer can make in order to improve his or her performance, based on the agronomic variation that the producer is encountering. Figure 12 is a block diagram of agronomic variation architecture 939, which includes an example of an agronomic variation analysis system 940 for performing these types of analyses and for generating recommendations. Figure 13 is a flowchart illustrating an example of the operation of the system shown in Figure 12. First, it is worth noting that for the agronomic variation analysis system 940 to perform its analyses, the raw data sensing layer 116 and the
[00233] In the example shown in Figure 12, agronomic variation analysis system 940 illustratively includes variation opportunity space tool 942, prescriptive component recommendation system 944 and may include other items 946. System 940 is also
[00234] Yield data 950 illustratively includes underlying data that includes grain yield and material other than grain (MOG) yield. It can also include a yield/MOG ratio, as an example. Grain yield can be sensed by a mass flow sensor that measures the mass flow rate of grain through the machine. MOG yield can also be measured, for example, by optical sensors that identify material other than grain entering the machine (e.g., entering the clean grain compartment), or by other sensors. Both yields can be correlated to geographic position in such a way that variation across a field can be calculated. The same is true for the ratios.
[00235] Topology data 952 illustratively includes data that can be used to obtain elevation and slope variation across the field. This can affect moisture availability, and thus yield variance, as well as machine performance. Topology data can, for example, include sensed pitch data, which indicates the pitch of the machine as it moves through the field. It may also include roll data that senses the rolling motion of the machine (such that roll variation can be identified) across the field. This, for example, can be used to generate side slope information indicating whether the machine is frequently operating on side slopes.
[00236] Crop property data 954 illustratively includes information that can be used to understand regional effects for a given crop.
[00237] Time of day variation data 956 may include information that can be used to identify the level of variation that exists across the harvest window (for example, across a time of day) and that can benefit from prescriptive components. For example, yield variation can be sensed by sensing the grain yield level over time and considering yield variations over time. Quality variation can also be obtained. For example, fuel economy (e.g., sensed fuel consumption and sensed grain mass flow rate) can be identified, and its variation over time can also be identified. Sensed grain loss (e.g., separator loss and shoe loss) can be calculated over time, and the
[00238] All of this information can be obtained by the agronomic variation analysis system 940. The variation opportunity space tool 942 can identify variation opportunity spaces, and the prescriptive component recommendation system 944 can generate prescriptive control components that can be used to address the various agronomic variation opportunities identified by tool 942.
[00239] Before describing the agronomic variation analysis system 940 in more detail, a brief overview will first be provided. The variation opportunity space tool 942 illustratively includes the agronomic variation identifier component 960, the variation opportunity space identifier component 962, the opportunity aggregation component 964, and may include other items
[00240] Prescriptive component recommendation system 944 illustratively includes opportunity-to-component mapping tool 968 and map store 970 (which itself illustratively includes opportunity-to-component map 972 and may include
[00241] System 940 also illustratively includes UI component 943 and processor 945. These may be the same UI components and processors described previously, or different ones. System 940 illustratively generates UI displays 941 for user 947. As described below, user 947 can be
[00242] Figure 13 is a flowchart illustrating the overall operation of system 940 in more detail. Figures 12 and 13 will now be described in conjunction with one another.
[00243] System 940 first determines that an analysis of agronomic variation is to be performed. This is indicated by block 990 in Figure 13. This can be done in a wide variety of different ways. For example, system 940 may receive a user input, indicated by block 992, requesting that system 940 generate an analysis of agronomic variation. In another example, system 940 can automatically generate the analyses intermittently, periodically, or at given times, as indicated by block 994. Additionally, system 940 can be configured to intermittently or continuously monitor the various types of agronomic variation and generate a full agronomic variation analysis (including recommendations 980) when one or more types of agronomic variation reach a threshold level. This is indicated by block 996. System 940 can determine that an analysis of agronomic variation is to be performed in other ways as well, and this is indicated by block 998.
[00244] The variation opportunity space tool 942 (e.g., the agronomic variation identifier 960) then accesses the agronomic variation parameters in the data store 948 (or elsewhere). This is indicated by block 1000 in Figure 13. Again, as briefly described above, the agronomic variation parameters may include underlying yield data 950, topology data 952, crop property data 954, time of day variation data 956 or other data 958. Component 960 then calculates the agronomic variation, by field, across multiple different dimensions. This is indicated by block 1002 in Figure 13. For example, it can calculate the level of variation of the yield data across a field, the level of topology variation,
[00245] Component 960 then illustratively combines the various agronomic variations calculated for the various dimensions to obtain a composite agronomic variation parameter for a given field. This is indicated by block 1004. An example of a calculation that can be used to obtain the composite variation parameter is identified as Equation 1 below:

Agronomic variation = x1 * STD(grain yield) + x2 * STD(MOG yield) + x3 * STD(pitch) + ... (Equation 1)

where the factors x1, x2, x3, ... are illustratively weighting factors that are used to fine-tune the calculation of agronomic variation through the standard deviation of each included parameter (or dimension). It will be noted that this is only one method for calculating agronomic variation, and any of a wide range of other mechanisms can also be used, including respective distribution functions for each contributing parameter (or dimension), etc.
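A minimal code sketch of Equation 1 might look like the following. The weights and the sample sensor series are purely illustrative assumptions.

```python
# Hypothetical sketch of Equation 1: a composite agronomic variation
# value formed as a weighted sum of the standard deviations of the
# contributing parameters (grain yield, MOG yield, pitch, ...).
from statistics import pstdev

def composite_agronomic_variation(parameters, weights):
    """Weighted sum of standard deviations across the parameter series.

    parameters: dict of parameter name -> list of sensed values for a field.
    weights: dict of parameter name -> tuning weight (x1, x2, x3, ...).
    """
    return sum(weights[name] * pstdev(series)
               for name, series in parameters.items())

field_data = {
    "grain_yield": [10.2, 11.0, 9.8, 10.5],  # e.g., sampled tonnes/ha
    "mog_yield":   [4.0, 4.4, 3.9, 4.1],
    "pitch":       [1.0, 2.5, 0.5, 1.5],     # degrees
}
weights = {"grain_yield": 1.0, "mog_yield": 0.5, "pitch": 0.25}
variation = composite_agronomic_variation(field_data, weights)
```

A field with flat, uniform yields would score near zero; larger weighted deviations push the composite value, and thus the opportunity ranking, higher.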
[00246] Variation opportunity space identifier component 962 then classifies an opportunity space, by field (or by field section, or on an even more or less granular basis), based on the composite variation parameter that component 960 calculated for the field. This is indicated by block 1006. This can be done, for example, in the same way discussed above with respect to identifying the performance opportunities and the various financial opportunities.
[00247] The opportunity aggregation component 964 then aggregates the identified opportunity spaces, across the various fields, to a
[00248] Opportunity-to-component mapping tool 968 then accesses the mappings in map store 970, based on the variation opportunity spaces identified by tool 942, to identify various prescriptive components that can be used by the producer to address the agronomic variations. These mappings are indicated by block 972. Accessing them is indicated by block
[00249] Prescriptive components can fall into a variety of different categories. For example, they can be components that change the configuration of the mobile machine. This is indicated by block 1013. They can be prescriptive components that provide different levels of machine automation and control. This is indicated by block 1012. As examples, they can be components that increase the level of automated control of the mobile machine. They can also be prescriptive components that provide a higher level of definition for data collection and reporting. This is indicated by block 1014. For example, if yields are relatively constant across a given field, then the yield sensor does not need to provide high definition sensor outputs, because the sensed yield does not vary much across the field. However, where there is a large variation in yield across a given field, the producer can benefit from a higher definition yield sensor, which obtains yield sensor measurements more frequently, or which provides higher definition yield sensor data for reporting, mapping, etc. This will allow the producer to more accurately track yield variation across the field. This is just one example of a data collection and reporting component 1014.
[00250] Of course, the prescriptive components can be a wide variety of other components, and this is indicated by block 1016. Other prescriptive components, for example, can include side hill assemblies that allow various items of machinery to operate more effectively on slopes, or where there is relatively large variation in slope or topology. Different levels of machine automation and control components can include components with more or less sophisticated automation, relying more or less on the operator to control the machine. Another example of a data collection and reporting component 1014 might be different reporting systems. A first level of reporting may simply be a monitor system in which the various sensed parameters and performance information are simply displayed in the cab, to the operator. Another level of reporting might include generating reports on standard definition data and providing them to a standard set of users. Yet another level might include generating those same reports, or additional or different reports, using high definition data, where the data is sampled at a higher sampling rate. These are just examples.
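An opportunity-to-component mapping of the kind tool 968 consults might be sketched as a simple lookup table. The opportunity keys and component names below are hypothetical examples, not items from the description.

```python
# Hypothetical sketch of an opportunity-to-component map (in the spirit
# of map 972 in map store 970): variation opportunities keyed to
# prescriptive components in the categories described above.
OPPORTUNITY_TO_COMPONENTS = {
    "high_yield_variation": [
        "high-definition yield sensor",     # data collection and reporting
        "high-definition reporting package",
    ],
    "high_slope_variation": [
        "side hill assembly",               # machine configuration
        "automated slope compensation",     # machine automation and control
    ],
}

def components_for(opportunities):
    """Return the prescriptive components mapped to identified opportunities."""
    matched = []
    for opp in opportunities:
        matched.extend(OPPORTUNITY_TO_COMPONENTS.get(opp, []))
    return matched

recommended = components_for(["high_yield_variation"])
```

The resulting component list would then feed the recommendation tool, which packages it into recommendations for the producer.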
[00251] Once tool 968 has identified the various components that can be used to address the agronomic variation, it illustratively provides opportunity information, identifying the agronomic variation opportunities and an identity of the various prescriptive components that have been identified to address those variations, to the financial opportunity space tool
[00252] Recommendation tool 976 then generates and issues one or more recommendations 980. This is indicated by block 1020 in Figure 13. The recommendations 980 can then be issued to a wide variety of different users 947. This is indicated by block 1022. For example, they can be stored locally or remotely, as indicated by block 1024. They can be issued to the producer 1026. They can be issued to various component manufacturers 1028 or to various agronomic service providers (ASPs) 1030. They can be issued to a wide variety of others as well, such as agronomists, seed companies, equipment manufacturers, service vendors, etc., and this is indicated by block 1032.
[00253] It can then be seen that system 940 can be used to dramatically improve the performance of various machinery. It can quickly surface relevant information that can be mapped to different prescriptive components or techniques that can be used to improve the overall performance of a machine or a group of machines. It can also be used to modify the configuration of a machine, the various components arranged on the machine, or a variety of other things that can be used to increase the machine's performance and efficiency. Additionally, it can increase the performance of system 940 itself. By way of example, because it surfaces relevant information more quickly, the user does not need to browse through the various data stores or
[00254] In another example, the mechanisms described here can be used not only to assess the performance of individual operators of a given machine, but also to assess the performance of teams of operators in a workplace. Many different types of teams can use these mechanisms. For example, forestry operations use teams of workers, as do construction operations, various agricultural operations, and so on. As an example, sugarcane harvesting operations often have fronts (or teams) of people who are used to support a sugarcane harvest. The front includes the sugarcane harvester and its operator, along with a set of one or more drivers who drive tractors pulling transport cars (or billet cars). The tractor drivers drive the tractors to unload sugarcane from the harvester and transport it to a temporary storage location, where it can be loaded onto road trucks. The road trucks carry the loaded sugarcane to a mill for processing. Thus, the equipment on a front can include a harvester and multiple tractors, each pulling one or more billet cars. The front can also include temporary storage equipment that unloads the sugarcane from the billet cars and places it on the road trucks that transport the sugarcane to the mill. Each of these equipment items has an operator. Each front often has a leader of
[00255] In some larger sugarcane operations, an operation owner may have large amounts of sugarcane acreage (sometimes more than a million acres). The owner may have hundreds of different combines that are operated day and night. Thus, each combine has three to four different operators assigned to it. Each combine also has two to three tractors with haul cars that operate with it, and each of these tractors operates day and night and so has three to four different operators assigned to it. A single operation can therefore have thousands of different pieces of equipment that are assigned to a team or front. The teams can collectively number thousands of different people as well.
[00256] Although the above scenario includes a very large owner who may have hundreds of different sugarcane harvesters, etc., the same mechanisms described here can be applied to much smaller owners who may own only one harvester, or a relatively small number of harvesters and associated fronts (or teams). They can also be applied in other agricultural, forestry or construction environments or operations. Therefore, the present mechanisms have wide applicability and scale accordingly, based on the size of the organization to which they are applied.
[00257] The mechanisms described here can be applied to all operators working on a given team or front, collectively, to arrive at one or more performance scores for each team or front.
[00258] Figure 14 is a block diagram of an illustrative team analysis architecture 1050 that can be used to analyze and generate reports indicative of the performance of different teams. Architecture 1050 illustratively includes a team analysis system 1052, which is shown having access to data store 1054. Data store 1054 can be any of the other data stores discussed above, or a separate data store. Architecture 1050 also shows that, in one example, system 1052 can generate user interface views 1056, with user input mechanism(s) 1058, for interaction by user 1060. System 1052 illustratively generates team reports 1062 and recommendations 1064 and provides them to user 1060 through the user interface views 1056. System 1052 may also provide them to other systems 1066.
[00259] Before describing the operation of architecture 1050, a brief description of a number of the items in architecture 1050 will initially be provided. Data store 1054 illustratively includes reports 110, as discussed above. It may illustratively include any set or subset of the underlying data, indicated by block 1068. This may include additional information, beyond the information that forms the basis of the operator performance reports 110. In one example, the reports and underlying data are indexed by team. This is indicated by block 1070. Data store 1054 can include a wide variety of other information 1072 as well.
[00260] The example shown in Figure 14 illustrates that the team analysis system 1052 illustratively includes the individual metric calculator system 1074, which itself includes calculator components that
[00261] System 1052 may also include comparison component 1084. Component 1084 can be used to compare the various scores for the various teams to generate an output rating that ranks the individual teams according to their performance scores. Component 1084 can also be used to compare a team's score with the score for the same team from a previous time period. For example, it can compare team scores for the same team on consecutive days, across different time periods, for the same field in different years, etc. Also, it can compare the team score with scores for other teams, with an average score for a set of other teams, with scores for teams in similar geographic regions, or with the best score or scores for similarly situated teams, etc.
[00262] System 1052 may also include the report generator
[00263] System 1052 may also include recommendation tool 1088. Tool 1088 can take the information calculated by the rest of the system and generate recommendations 1064 to improve the overall performance of individual teams, and of the group of teams as a whole. In doing so, tool 1088 can access recommendation rules or heuristics 1090. It can generate recommendations in other ways as well. For example, the performance scores, or the underlying data that is used to generate the performance scores, can be mapped to recommendations. Therefore, based on the performance scores and/or the underlying data, tool 1088 can access that mapping to identify recommendations that can be used to improve the performance of a team, a group of teams, etc.
[00264] In one example, system 1052 may illustratively include processor 1092 and user interface component 1094, and may also include other items 1096. User interface component 1094 may, either by itself or under the control of other items in architecture 1050, generate the UI displays 1056. Processor 1092 can be used to facilitate the functionality of the items in system 1052, or this functionality can be facilitated in other ways as well.
[00265] Figures 15A and 15B (collectively referred to herein as Figure 15) show a flow diagram illustrating an example of the operation of team analysis architecture 1050. Figures 14 and 15 will now be described in conjunction with one another.
[00266] Team analysis system 1052 first determines that a team analysis is to be performed. This is indicated by block 1098 in Figure 15. This can be done in a variety of different ways. For example, the team analysis can be performed substantially
[00267] The analysis can also be performed at an individual's request. For example, the operator of a sugarcane harvester can request that the system run a team analysis so that a current performance score for the team can be displayed. Others may request that this be done as well. This is indicated by block 1102.
[00268] The analysis can be performed on a periodic basis (such as at the end of each shift, at the end of each day, etc.). This is indicated by block 1104.
[00269] The analysis can also be performed in response to one or more different performance scores reaching a threshold value. For example, if a team is performing poorly, and its performance score reaches a low score threshold, this can cause system 1052 to run an analysis for each team. Similarly, if any of the individual metrics for a team fall below a threshold, this can indicate that system 1052 should perform an analysis for each team. Of course, a team analysis can also be performed in response to other items reaching other thresholds. Performing a team analysis in response to a threshold value being reached is indicated by block 1106. System 1052 may also determine that a team analysis is to be performed in other ways as well, and this is indicated by block 1108.
[00270] System 1052 then selects or identifies a team for which the analysis is to be performed. This is indicated by block 1110. For example, where an operator or team member requests that it be performed
[00271] The individual metrics calculator system 1074 then accesses the operator performance reports 110 and/or any underlying or other data that system 1074 will operate on, for the selected team members. This is indicated by block 1112. The particular data being accessed may vary based on different applications (such as a sugarcane application, a construction site application, a forestry worksite application, other farming applications, etc.). Also, even within the same application, the data being accessed can vary. For example, it can vary based on the performance indicators that a given user wants to use. It may be that an owner or operator believes that certain performance metrics are more important than others. In that case, system 1074 may access a certain subset of the data in data store 1054. In another example, an owner or operator may believe that other performance metrics are more important. In that case, system 1074 may access a different subset of the data in data store 1054. In either case, system 1074 gains access to the data it uses to calculate individual performance metrics for the individual members of the selected team.
[00272] The individual metric calculator system 1074 then
[00273] By way of example only, one of the KPIs that is calculated for individual members might be the total amount of crop processed during a selected period of time. This is indicated by block 1118. For example, where the team member is the operator of a sugarcane harvester, system 1074 can calculate the total tonnage of sugarcane that has been harvested. Where the team member is the driver of a tractor hauling a billet car, this may be the total tonnage of billets that have been hauled by that driver.
[00274] KPIs could also include, for example, the amount of crop processed per unit of fuel. This is indicated by the block
[00275] Additionally, KPIs may include a measure of damage
[00276] KPIs can also include time measurements, such as the average travel time between the sugarcane harvester and the temporary storage area or transfer point where the sugarcane is transferred to a road truck. This is indicated by block 1126.
[00277] KPIs can include the total amount of equipment allocated to the team the team member is on. For example, the team may be allocated three tractors pulling billet cars, or the team may be allocated 2.5 tractors (with one splitting its time between two different teams). Equipment allocation is indicated by block 1121.
[00278] KPIs can include one or more safety metrics. This is indicated by block 1130. For example, safety metrics can identify the number of times the individual team member has been involved in an accident, the severity of the accident, etc. Safety metrics can include not only operations that are unsafe for workers, but also operations that are unsafe for machines.
[00279] KPIs can be correlated to position, as indicated by block 1132. They can also be correlated to topography, as indicated by block 1134, or to time, as indicated by block 1136. These items can also provide significant insight. For example, when KPIs are correlated to position, they can indicate that a given team member was operating in a particularly difficult field. Similarly, where they are correlated to topography, this could indicate that the operator was operating in an uneven topographic area (such as a hilly area, where fuel efficiency might suffer). Where they are correlated to time, this could indicate that a particular team member was operating at an advantageous time of day, or at a disadvantageous time of day. As an example, if the team member is operating on a shift that includes early morning hours, this could indicate that the crop has excess moisture (such as dew). This can affect the overall operation and efficiency of the team.
[00280] Of course, KPIs can be a wide variety of other KPIs as well. This is indicated by block 1138.
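As a minimal, hypothetical sketch (not part of the patent disclosure), the per-operator KPIs described above, such as total crop processed, crop processed per unit of fuel, and average trip time, might be computed from logged machine records. The record layout and names are illustrative assumptions only:

```python
from dataclasses import dataclass

@dataclass
class MachineRecord:
    # One logged interval for a team member's machine. The fields are
    # illustrative; the description does not prescribe a record layout.
    tons_processed: float
    fuel_used_liters: float
    trip_minutes: float

def individual_kpis(records):
    """Compute a few of the per-operator KPIs described above."""
    total_tons = sum(r.tons_processed for r in records)
    total_fuel = sum(r.fuel_used_liters for r in records)
    trips = [r.trip_minutes for r in records if r.trip_minutes > 0]
    return {
        "total_crop_tons": total_tons,  # total crop processed (block 1118)
        "tons_per_liter": total_tons / total_fuel if total_fuel else 0.0,
        "avg_trip_minutes": sum(trips) / len(trips) if trips else 0.0,  # block 1126
    }

records = [
    MachineRecord(tons_processed=40.0, fuel_used_liters=20.0, trip_minutes=12.0),
    MachineRecord(tons_processed=60.0, fuel_used_liters=30.0, trip_minutes=18.0),
]
kpis = individual_kpis(records)
print(kpis)  # e.g. 100.0 tons total, 2.0 tons per liter, 15.0 min average trip
```

In practice the subset of records and KPIs computed would depend on the application and on which performance indicators the owner or operator selects, as described above.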
[00281] Once all individual KPIs are calculated by system 1074 for a selected team member, they can be stored and also provided to team member composite score generator 1080. Generator 1080 illustratively aggregates the metrics to obtain a composite performance score for the team member
[00282] System 1074 then determines whether there are any more team members for which individual and composite performance scores should be generated. This is indicated by block 1142. If so, processing reverts to block 1114, where the next team member is selected. If not, however, all information generated by system 1074 can be provided to team member composite score generator 1080. Generator 1080 aggregates the individual performance metrics to obtain aggregated metrics for all team members. This is indicated by block 1144. In doing this, it may be that some of the metrics can be combined across all team members. In another example, only a subset of the metrics is aggregated across all team members, while other subsets are aggregated for different team members. For example, it may be that certain metrics pertain only to tractor drivers, or only to combine operators, but not both. In that case, those metrics are aggregated only for the tractor drivers, the combine operators, etc.
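The role-specific aggregation just described (some metrics pertaining only to tractor drivers, others only to combine operators) can be sketched as follows. The metric names, roles, and values are hypothetical and chosen only for illustration:

```python
from collections import defaultdict

# Hypothetical per-member metrics, tagged with each member's role; the
# description does not fix a schema, so the field names are illustrative.
members = [
    {"name": "A", "role": "tractor_driver", "tons_hauled": 120.0},
    {"name": "B", "role": "tractor_driver", "tons_hauled": 80.0},
    {"name": "C", "role": "combine_operator", "tons_harvested": 210.0},
]

# Which metrics pertain to which role (the subset idea of block 1144).
ROLE_METRICS = {
    "tractor_driver": ["tons_hauled"],
    "combine_operator": ["tons_harvested"],
}

def aggregate_by_role(members):
    """Aggregate each metric only over the members whose role it applies to."""
    totals = defaultdict(float)
    for m in members:
        for metric in ROLE_METRICS[m["role"]]:
            totals[metric] += m[metric]
    return dict(totals)

print(aggregate_by_role(members))  # tons_hauled summed over drivers only
```

Metrics that apply to every team member would simply list the same metric under every role.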
[00283] In addition, some metrics may be team metrics that are calculated for the team as a whole, without being calculated for individual team members. In that case, generator 1080 calculates the team metrics for the selected team. This is indicated by block 1146. For example, the total amount of crop processed from the sugarcane harvester to the road truck can be a metric that is calculated for the team as a whole. If the individual members' metrics (such as the amount of crop processed by each individual) were
[00284] Team score aggregator 1082 then considers the various scores and metrics that have been calculated by the other components and generates a composite score for the selected team. This is indicated by block 1148. The composite score can be made up of a wide variety of the other scores: individual scores and metrics, team metrics, composite scores for individual team members, or other information. These can be aggregated according to a wide variety of different aggregation mechanisms. Aggregation can be a sum, a weighted combination of individual values, a weighted combination of average values, or a wide variety of other aggregations. In any case, a composite score that is indicative of the overall operation of the team is generated.
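One of the aggregation mechanisms mentioned above, a weighted combination of individual values, might be sketched as below. The category names and weights are illustrative assumptions; in practice an owner or operator would choose them to reflect which metrics matter most:

```python
# Hypothetical category weights; not prescribed by the description.
WEIGHTS = {
    "productivity": 0.5,
    "fuel_economy": 0.3,
    "safety": 0.2,
}

def composite_score(scores, weights=WEIGHTS):
    """Weighted combination of per-category scores (each assumed 0-100)."""
    total_weight = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total_weight

team_scores = {"productivity": 80.0, "fuel_economy": 70.0, "safety": 90.0}
print(composite_score(team_scores))  # 0.5*80 + 0.3*70 + 0.2*90 = 79.0
```

A plain sum or an average is the special case where all weights are equal.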
[00285] System 1052 then determines whether there are more teams to be analyzed at this time. If so, processing reverts to block 1010, where the next team is selected. This is indicated by the block
[00286] If there are no more teams to analyze at this point in time, then comparison component 1084 can generate a wide variety of different types of comparisons that can be useful in analyzing a team's performance. For example, teams can be compared to each other. Component 1084 can also generate a sorted list of
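A sorted team comparison of the kind the comparison component produces can be sketched as a simple ranking by composite score. The team names and scores are made-up example data:

```python
# Hypothetical composite scores for three teams (or "fronts").
teams = [
    {"team": "Front 1", "composite_score": 79.0},
    {"team": "Front 2", "composite_score": 84.5},
    {"team": "Front 3", "composite_score": 71.2},
]

# Rank teams from best to worst composite score.
ranked = sorted(teams, key=lambda t: t["composite_score"], reverse=True)
for position, t in enumerate(ranked, start=1):
    print(position, t["team"], t["composite_score"])
```

The same ranking could be produced per metric, or restricted to teams operating under comparable field conditions.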
[00287] At some point (such as while system 1052 is calculating various metrics for another team), report generator 1086 generates one or more team reports that indicate the analysis results for the team that was just analyzed. This is indicated by block 1154. Team reports (represented by the number 1062 in Figure 14) can take a wide variety of different forms. They can be relatively simple, indicating only the composite score for a given team, or they can be relatively detailed, indicating not only the composite score but also the underlying scores and metrics that were used to generate the composite score, in addition to other data. In yet another example, reports are generated with navigable links (or user input mechanisms). The report can show a top-level global composite score for a team and can provide a drill-down link (or button or other input mechanism) that can be actuated to allow a user to drill down to the various detailed information that was used in generating the composite score. All of these and other report formats are contemplated here.
[00288] In one example, recommendation tool 1088 also generates recommendations based on information in the report or other data. This is indicated by block 1156. Recommendations can be of a wide variety and can be correlated to various scores or underlying data. For example, if the metric that indicates the average amount of time to take a trip from the combine to the staging area or transfer point is relatively large, the recommendation might be to relocate the staging point more
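The trip-time recommendation just described amounts to a threshold rule on a KPI. A hedged sketch follows; the threshold value and recommendation text are illustrative assumptions, not taken from the description:

```python
# Hypothetical threshold on the average trip time KPI (minutes).
TRIP_TIME_THRESHOLD_MIN = 20.0

def recommend(kpis):
    """Return rule-based recommendations for a dict of KPI values."""
    recommendations = []
    if kpis.get("avg_trip_minutes", 0.0) > TRIP_TIME_THRESHOLD_MIN:
        recommendations.append(
            "Average trip time is high; consider relocating the staging "
            "point closer to the harvesting area."
        )
    return recommendations

print(recommend({"avg_trip_minutes": 25.0}))  # one recommendation
print(recommend({"avg_trip_minutes": 12.0}))  # no recommendations
```

Additional rules correlated to other scores or underlying data would be added in the same fashion.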
[00289] Analysis results can then be sent to multiple destinations in various ways. This is indicated by block 1158. For example, reports and recommendations can be issued to the team leader, to a field coordinator who is coordinating multiple different teams in a given field or in a given geographic location, to the owner, or to a wide variety of other people. Also, individual team member metrics and scores can be provided to the individual team members, either at the end of a shift, periodically, or in real time (or near real time), by providing a UI display or other mechanism that indicates the current score for the individual operator, for the team, or both. These are just examples.
[00290] Metrics, scores, reports and/or recommendations can be stored for later use, either within a piece of equipment the team is using, within another piece of equipment, or at a remote server location. Storing them for later use is indicated by block 1160. Information can also be provided as real-time feedback to operators. This is indicated by the block
[00291] Reports and recommendations may also be issued at the end of a reporting period. This is indicated by block 1164.
[00292] Information can be issued to the field coordinator or team coordinator. This is indicated by block 1166. It can be issued to an owner of the organization that employs the system. This is indicated by block 1168. It may be issued to equipment vendors or manufacturers or other service providers who provide maintenance or other services relating to the equipment. This is indicated by block 1170. By way of example, the vendor or service provider may identify that some combines, tractors, wagons, service vehicles, etc., need to undergo certain types of maintenance or repair. When they receive such reports, they can then pre-order the different parts that are needed to carry out the maintenance or repair, so the work can be done more quickly.
[00293] Information can be provided to the mill (or other purchaser of the sugarcane or other crop being harvested). This is indicated by block 1172. Information can be output to other systems as well, and this is indicated by block 1174.
[00294] In the example where information (metrics, scores, reports and/or recommendations, etc.) is output in an interactive fashion, report generator 1086 can receive user interaction with the report. This is indicated by block 1176. If report generator 1086 receives user interaction with a report, it performs an action based on the user interaction. This is indicated by block 1178. For example, a report might be generated that presents the composite scores of
[00295] In that case, report generator 1086 navigates the user to a next detail view that displays the next level of detail information to the user. The drill-down interaction is indicated by the block
[00296] Report generator 1086 may also provide user input mechanisms that allow the user to change the way in which information is displayed. This is indicated by block 1184. For example, there may be user input mechanisms that allow the user to have the data displayed as a histogram, as a graph, along a timeline, as a bar graph, or on a map. As an example, report generator 1086 can generate a view that shows the team's overall performance score (or the individual metrics that make up that score) correlated to where the team was operating in the field. In that case, report generator 1086 can generate a map showing how the team performance score varied to
[00297] Additionally, there are a wide variety of other user interactions that can be detected, and the corresponding actions can be performed by report generator 1086. This is indicated by block 1186.
[00298] It can thus be seen that the team analysis system provides significant advantages. It is extremely difficult for teams or fronts (or workers on a construction site, on a forestry site, on a site where forage or other crops are being harvested, a site where material is being sprayed onto a crop, where a crop is being planted, etc.) to compare their performance with that of other teams or fronts on even a very simple set of criteria (such as acres harvested). Even then, the comparison does not account for differences in conditions, terrain, topography, etc. Additionally, the present system advantageously allows teams to obtain an indication of their overall performance, along with detailed information, in a detailed set of metrics, that further characterizes their performance. The present system also advantageously allows teams to compare themselves with other teams, even accounting for variations in different fields, crop types, crop conditions, topography, etc. Because the system senses, aggregates, and details this type of information, it can lead to significantly improved performance of the team and of individual team members. Additionally, because the system automatically senses this information and details the information, it improves the operation of the system itself. A user does not need to conduct multiple queries against the system, and consume computing and memory overhead, to fetch this information and then aggregate it. Instead, the system automatically generates the information and stores it in real time (or near real time). This reduces the number of searches initiated in the system to generate reports. This
[00299] This discussion has mentioned processors and servers. In one embodiment, the processors and servers include computer processors with associated memory and timing circuitry, not separately shown. They are functional parts of the systems or devices to which they belong, and are activated by, and facilitate the functionality of, the other components or items in those systems.
[00300] Also, a number of user interface views may have a wide variety of different user-actuable input mechanisms arranged on them. For example, user-actuable input mechanisms can be text boxes, check boxes, icons, links, drop-down menus, search boxes, etc. They can also be actuated in a wide variety of different ways. For example, they can be actuated using a point-and-click device (such as a trackball or mouse). They can be actuated using hardware buttons, switches, a joystick or keyboard, thumb switches, thumb pads, etc. They can also be actuated using a virtual keyboard or other virtual actuators. In addition, where the screen on which they are displayed is a touch-sensitive screen, they can be actuated using touch gestures. Also, where the device displaying them has voice recognition components, they can be actuated using voice commands.
[00301] A number of data stores have also been discussed. It will be noted that each can be divided into multiple data stores. All may be local to the systems accessing them, all may be remote, or some may be local while others are remote. All of these configurations are contemplated here.
[00302] Also, the figures show a number of blocks with
[00303] Figure 16 is a block diagram of architecture 100, shown in Figure 1, and those shown in Figures 2, 7, 12 and 14, except that the elements are arranged in a cloud computing architecture
[00304] The description is intended to include both public cloud computing and private cloud computing. Cloud Computing
[00305] A public cloud is managed by a vendor and typically supports multiple consumers using the same infrastructure. Also, a public cloud, as opposed to a private cloud, can free end users from managing the hardware. A private cloud can be managed by the organization itself, and the infrastructure is typically not shared with other organizations. The organization still maintains the hardware to a certain extent, such as installations and repairs, etc.
[00306] In the embodiment shown in Figure 16, some items are similar to those shown in Figures 1, 2, 7, 12 and 14, and they are similarly numbered. Figure 16 specifically shows that layers 104, 106 and 108 and systems 660, 940 and 1052 can be located in cloud 502 (which can be public, private, or a combination where some portions are public while others are private). Therefore, users 101, 674, 947 or 1060 can operate machine 102 or access those systems or other systems using a user device. User 101, for example, may use a user device 504 on machine 102. User 674, for example, may use a different user device 504. Machine 102 can access layers 104, 106 and 108 through cloud 502. User 674 can access system 660 through cloud 502, and users 947, 1060, 509 and 166 can also access data and systems through cloud 502.
[00307] Figure 16 also depicts another embodiment of a cloud architecture. Figure 16 shows that it is also contemplated that some elements of architecture 100, or those in Figures 2, 7, 12 or 14, may be arranged in cloud 502 while others are not. By way of example, data store 114 may be arranged outside of cloud 502 and accessed through cloud 502. In another example, layer 108 (or others
[00308] Additionally, Figure 16 shows that a remote display component 507 (which may be another user device or another component) can be used by one or more viewers 509, 1066 that are remote from machine 102. Viewers 509, 1066 may include user 674 or other viewers who can view the reports, opportunity or variance information, staff information, or other information, if properly authenticated.
[00309] It will also be noted that architecture 100, or portions thereof, or system 660, or the other architectures and systems, can be arranged on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, tablet computers, or other mobile devices, such as portable computers, cell phones, smart phones, multimedia players, personal digital assistants, etc.
[00310] Figure 17 is a simplified block diagram of one illustrative embodiment of a portable or mobile computing device that can be used as a user's or client's portable device 16, on which the present system (or parts of it) can be arranged. Figures 18-22 are examples of portable or mobile devices.
[00311] Figure 17 provides a general block diagram of the components of a client device 16 that can run components of architecture 100 or system 660 or the architectures in Figures 12, 14 or 16, or
[00312] In other embodiments, applications or systems are received on a removable Secure Digital (SD) card that is connected to an SD card interface 15. SD card interface 15 and communication links 13 communicate with a processor 17 (which can also embody processors 140, 155, 163, 186, 680, 945 or 1092 from Figures 2, 7, 12 and 14) along a bus 19 that is also connected to memory 21 and input/output (I/O) components 23, as well as clock 25 and location system 27.
[00313] I/O components 23, in one embodiment, are provided to facilitate input and output operations. I/O components 23 for various embodiments of device 16 can include input components such as buttons, touch sensors, multi-touch sensors, optical or video sensors, voice sensors, touch screens, proximity sensors, microphones, tilt sensors, and gravity switches, and output components such as a display device, a speaker, and/or a printer port. Other I/O components 23 can be used as well.
[00314] Clock 25 illustratively comprises a real-time clock component that emits a given time. It can also, illustratively, provide timing functions for processor 17.
[00315] Location system 27 illustratively includes a
[00316] Memory 21 stores operating system 29, network settings 31, applications 33, application configuration settings 35, data store 37, communication drivers 39, and communication configuration settings 41. Memory 21 may include all types of tangible, volatile and non-volatile computer-readable memory devices. It may also include computer storage media (described below). Memory 21 stores computer-readable instructions that, when executed by processor 17, cause the processor to perform computer-implemented steps or functions according to the instructions. Processor 17 can be activated by other components to facilitate their functionality as well.
[00317] Examples of network settings 31 include things such as proxy information, Internet connection information, and mappings. Application configuration settings 35 include settings that tailor the application for a specific enterprise or user. Communication configuration settings 41 provide parameters for communicating with other computers and include items such as GPRS parameters, SMS parameters, connection user names, and passwords.
[00318] Applications 33 can be applications that have previously been stored on device 16 or applications that are installed during use, although these can also be part of operating system 29, or hosted external to device 16.
[00319] Figure 18 shows an embodiment in which device 16 is a tablet computer 601. In Figure 18, computer 600 is shown with the user interface display from Figure 10A displayed on display screen 603. Screen 603 can be a touch screen (so finger gestures from user 605 can be used to interact with the application) or a pen-enabled interface that receives inputs from a pen or stylus. It can also use an on-screen virtual keyboard. Of course, it might also be attached to a keyboard or other user input device through a suitable attachment mechanism, such as a wireless link or USB port, for instance. Computer 600 can also illustratively receive voice inputs.
[00320] Figures 19 and 20 provide additional examples of devices 16 that can be used, although others can be used as well. In Figure 19, a multifunction cell phone, smart phone, or mobile phone 45 is provided as device 16. Phone 45 includes a set of alphanumeric keypads 47 for dialing telephone numbers, a display 49 capable of displaying images including application images, web pages, photographs, and video, and control buttons 51 for selecting items shown on the display. The phone includes an antenna 53 for receiving cellular phone signals such as General Packet Radio Service (GPRS) and 1Xrtt, and Short Message Service (SMS) signals. In some embodiments, phone 45 also includes a Secure Digital (SD) card slot 55 that accepts an SD card.
[00321] The mobile device of Figure 20 is a personal digital assistant (PDA) 59 or a tablet computing or multimedia playback device, etc. (hereinafter referred to as PDA 59). PDA 59 includes an inductive screen 61 that senses the position of a stylus 63 (or other pointers, such as a user's finger) when the stylus is positioned over the screen. This allows the user to select, highlight and move items on the
[00322] Figure 21 is similar to Figure 19, except that the phone is a smart phone 71. Smart phone 71 has a touch-sensitive display 73 that displays icons or tiles or other user input mechanisms 75. Mechanisms 75 can be used by a user to run applications, make calls, perform data transfer operations, etc. In general, smart phone 71 is built on a mobile operating system and offers more advanced computing capability and connectivity than a multifunction cell phone. Figure 22 shows phone 71 with the display of Figure 10B displayed on it.
[00323] Note that other forms of devices 16 are possible.
[00324] Figure 23 is one embodiment of a computing environment in which architecture 100, or the other architectures, or parts of them (for example), can be arranged. With reference to Figure 23, an exemplary system for implementing some embodiments includes a general-purpose computing device in the form of a computer 810. Components of computer 810 may include, but are not limited to.
[00325] Computer 810 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by computer 810 and include both volatile and non-volatile media, removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media are different from, and do not include, a modulated data signal or carrier wave. They include hardware storage media, including both volatile and non-volatile, removable and non-removable media, implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media include, but are not
[00326] System memory 830 includes computer storage media in the form of volatile and/or non-volatile memory such as read-only memory (ROM) 831 and random access memory (RAM) 832. A basic input/output system 833 (BIOS), containing the basic routines that help to transfer information between elements within computer 810, such as during start-up, is typically stored in ROM 831. RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820. By way of example, and not limitation, Figure 23 illustrates operating system 834, application programs 835, other program modules 836, and program data 837.
[00327] Computer 810 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example only, Figure 23 illustrates a controller for
[00328] Alternatively, or in addition, the functionality described here can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), system-on-a-chip systems (SOCs), complex programmable logic devices (CPLDs), etc.
[00329] The drives and their associated computer storage media discussed above and illustrated in Figure 23 provide storage of computer-readable instructions, data structures, program modules and other data for computer 810. In Figure 23, for example, hard disk drive 841 is illustrated as storing operating system 844, application programs 845, other program modules 846, and program data 847.
[00330] A user can enter commands and information into computer 810 through input devices such as a keyboard 862, a microphone 863, and a pointing device 861, such as a mouse, trackball or touch pad. Other input devices (not shown)
[00331] Computer 810 is operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 880. Remote computer 880 can be a personal computer, a laptop computer, a server, a router, a network PC, a peer device, or other common network node, and typically includes many or all of the elements described above relating to the computer.
[00332] When used in a LAN networking environment, computer 810 is connected to LAN 871 through a network interface or adapter 870. When used in a WAN networking environment, computer 810 typically includes a modem 872 or other means of establishing communications over WAN 873, such as the Internet. Modem 872, which can be internal or external, can be connected to system bus 821 via user input interface 860, or another suitable mechanism. In a networked environment, program modules
[00333] It should also be noted that the different embodiments described here can be combined in different ways. That is, parts of one or more embodiments can be combined with parts of one or more other embodiments. All of this is contemplated here.
[00334] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (60)
[1]
1. Computer-implemented method, characterized in that it comprises: receiving (232) a first set of data (120), corresponding to a given operator (101), based on sensor signals indicative of sensed parameters (208, 210, 212), sensed on an agricultural machine (102) while the given operator (101) is operating the agricultural machine (102); evaluating (244) the first data set (120) against a reference data set (156) to obtain an evaluation value (122) indicative of how the first data set (120) compares to the reference data set (156); generating (272) a performance score (124), based on the evaluation value (122), corresponding to the given operator (101) and indicative of a performance of the given operator (101) in operating the agricultural machine (102); and generating (284) an operator performance report (110) based on the performance score (124).
[2]
2. Computer-implemented method according to claim 1, characterized in that evaluating (244) the first data set (120) against the reference data set (156) comprises: evaluating (244) the first data set (120) against historical reference data (252) for the given operator (101).
[3]
3. Computer-implemented method according to claim 1, characterized in that the given operator (101) works for an organization, and wherein evaluating (244) the first data set (120) against the reference data set (156) comprises: evaluating (244) the first data set (120) against historical reference data (254) for other operators in the organization.
[4]
4. Computer-implemented method according to claim 1, characterized in that evaluating (244) the first data set (120) against the reference data set (156) comprises: evaluating (244) the first data set (120) against historical reference data (256) for other operators that have a threshold performance score.
[5]
5. Computer-implemented method according to claim 1, characterized in that receiving (232) the first data set (120) comprises: receiving (232) a plurality of data subsets, each data subset indicative of a performance of the given operator in one of a plurality of corresponding performance categories, wherein evaluating (244) comprises evaluating (244) each of the data subsets against reference data (156) for the corresponding performance category to obtain an evaluation value (122) corresponding to each performance category.
[6]
6. Computer-implemented method according to claim 5, characterized in that evaluating (244) comprises: evaluating (244) each of the data subsets against the reference data (156) for the corresponding performance category, to obtain a plurality of evaluation values (122) corresponding to at least some of the performance categories, wherein generating (272) the performance score (124) comprises generating (272) a performance score (124) for each of the performance categories, based on the assessment value (122) for each of the performance categories.
[7]
7. Computer-implemented method according to claim 6, characterized in that generating (272) the performance score (124) comprises: generating (288) a composite performance score (190), indicative of the overall performance of the given operator (101), based on the performance scores (124) generated for each performance category.
[8]
8. Computer-implemented method according to claim 7, characterized in that it further comprises: generating (290) operator recommendations (192) based on the performance scores (124) for each performance category, the operator recommendations (192) being indicative of operational changes to improve the performance scores (124).
[9]
9. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score (124) for each of the performance categories comprises: generating (272) a productivity performance score (124) indicative of a performance of the given operator (101) in terms of productivity while operating (204) the agricultural machine (102).
[10]
10. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score for each of the performance categories comprises: generating (272) a logistics performance score (124) indicative of a performance of the given operator (101) in terms of logistics while operating (204) the agricultural machine (102).
[11]
11. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score (124) for each of the performance categories comprises: generating (272) a fuel economy performance score (124) indicative of a performance of the given operator (101) in terms of fuel economy while operating (204) the agricultural machine (102).
[12]
12. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score (124) for each of the performance categories comprises: generating (272) a harvested material loss performance score (124) indicative of a performance of the given operator (101) in terms of harvested material loss.
[13]
13. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score (124) for each of the performance categories comprises: generating (272) a harvested material quality performance score (124) indicative of a performance of the given operator (101) in terms of harvested material quality.
[14]
14. Computer-implemented method according to claim 6, characterized in that generating (272) a performance score (124) for each of the performance categories comprises: generating (272) an energy utilization performance score (124) indicative of a performance of the given operator (101) in terms of energy utilization while operating (204) the agricultural machine (102).
[15]
15. Computer system, characterized in that it comprises: a data evaluation layer (104) that receives first data (120) indicative of sensed operating parameters, sensed while a given operator (101) operates (204) a given agricultural machine (102), the data evaluation layer (104) generating an evaluation value (122) indicative of an evaluation of the first data (120) against a set of reference data (156); a pillar score generation layer (106) that receives the evaluation value (122) and generates a performance pillar score (124), based on the evaluation value (122), indicative of a relative performance level of the given operator (101) relative to a level of performance represented by the reference data set (156); and a report generator component (188) that generates an operator performance report (110) for the given operator (101) based on the performance pillar score (124).
[16]
16. Computer-implemented method, characterized in that it comprises: receiving (632) a set of data based on sensor signals relating to parameters sensed in a mobile machine (102); evaluating (636) the data set to determine a degree of compliance with each of a plurality of actionable conditions; identifying (638) a recommendation to change the operation of the mobile machine (102) based on the degree to which each condition is met; and generating (652) an output based on the identified recommendation.
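The evaluate-then-recommend flow of claim 16 can be sketched in code. This is an illustrative Python sketch only, not the patented implementation; the condition names, thresholds, and data fields are invented for the example.

```python
# Illustrative sketch of claim 16: evaluate sensed data against actionable
# conditions, then identify recommendations for the triggered conditions.
# Condition names, thresholds, and data fields are hypothetical.

def evaluate_conditions(data, conditions):
    """Degree (0.0-1.0) to which each actionable condition is met."""
    return {name: check(data) for name, check in conditions.items()}

def identify_recommendations(degrees, threshold=0.5):
    """Conditions whose degree of compliance exceeds the threshold are triggered."""
    return [name for name, degree in degrees.items() if degree > threshold]

# Example actionable conditions: each maps sensed data to a degree in [0, 1].
conditions = {
    "reduce_engine_power": lambda d: max(0.0, min(1.0, (d["power_pct"] - 85) / 15)),
    "slow_down_for_grain_loss": lambda d: max(0.0, min(1.0, d["grain_loss_pct"] / 5)),
}

data = {"power_pct": 98, "grain_loss_pct": 1.0}
degrees = evaluate_conditions(data, conditions)
recommendations = identify_recommendations(degrees)
```

In this sketch each actionable condition maps sensed data to a degree of compliance in [0, 1], and only conditions above the threshold yield a recommendation; the output step of the claim would then present the surviving recommendations on a user interface.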
[17]
17. Computer-implemented method according to claim 16, characterized in that generating an output comprises: issuing the recommendation in a user interface.
[18]
18. Computer-implemented method according to claim 17, characterized in that receiving a set of data comprises: receiving operator performance data indicative of a performance of a given operator when operating the mobile machine; and evaluating the operator performance data against benchmark data to obtain a performance metric indicative of how the operator performance data compares to the benchmark data.
[19]
19. Computer-implemented method according to claim 17, characterized in that identifying a recommendation comprises: determining which of the actionable conditions are triggered conditions based on the degree of compliance for each condition; and identifying selected conditions, from the triggered conditions, for which a recommendation should be issued.
[20]
20. A computer-implemented method according to claim 19, characterized in that determining which of the actionable conditions are triggered conditions comprises: comparing predefined items in the dataset, corresponding to a selected condition, against reference data for the predefined items to obtain a degree of fulfillment for each predefined item; and determining a degree of fulfillment of the selected condition based on the degree of fulfillment of each of the predefined items in the dataset.
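The per-item comparison of claim 20 can be illustrated with a small sketch. This is a hypothetical Python example: the items, reference values, and tolerances are invented, and the combination rule (a mean of per-item degrees) is just one plausible choice, not one specified by the claim.

```python
# Hypothetical sketch of claim 20: compare predefined items against reference
# data, then combine per-item degrees into one degree for the condition.

def item_fulfillment(value, reference, tolerance):
    """Degree (0.0-1.0) to which one predefined item deviates from its reference."""
    deviation = abs(value - reference) / tolerance
    return min(1.0, deviation)

def condition_fulfillment(items, references, tolerances):
    """Combine per-item degrees (here: their mean) into a condition-level degree."""
    degrees = [item_fulfillment(items[k], references[k], tolerances[k]) for k in items]
    return sum(degrees) / len(degrees)

# Invented example items and reference data.
items = {"fuel_rate_lph": 60.0, "ground_speed_kph": 5.0}
references = {"fuel_rate_lph": 50.0, "ground_speed_kph": 6.0}
tolerances = {"fuel_rate_lph": 20.0, "ground_speed_kph": 2.0}
degree = condition_fulfillment(items, references, tolerances)
```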
[21]
21. Computer-implemented method according to claim 19, characterized in that issuing the recommendation comprises: issuing a recommendation, for a given triggered condition, that varies based on the degree of compliance with the given triggered condition, or that varies according to a predefined function associated with the given triggered condition.
[22]
22. Computer-implemented method according to claim 19, characterized in that identifying a recommendation comprises: identifying the selected conditions for which a recommendation should be issued, based on a priority assigned to each condition.
[23]
23. Computer-implemented method according to claim 19, characterized in that identifying a recommendation comprises: identifying the selected conditions for which a recommendation should be issued, based on an amount of time since a recommendation was last issued, or based on a targeted number of recommendations to be issued at any given time.
[24]
24. Computer-implemented method according to claim 16, characterized in that evaluating the dataset to determine the degree of compliance with a plurality of actionable conditions comprises: evaluating the dataset in relation to an energy usage condition to determine whether changing the operation of the mobile machine will improve energy usage.
[25]
25. The computer-implemented method of claim 16, characterized in that evaluating the dataset to determine a degree of compliance with a plurality of actionable conditions comprises: evaluating the dataset against a grain loss condition to determine whether changing the operation of the mobile machine will improve grain loss.
[26]
26. The computer-implemented method of claim 16, characterized in that evaluating the dataset to determine a degree of compliance with a plurality of actionable conditions comprises: evaluating the dataset against a grain quality condition to determine whether changing the mobile machine operation will improve grain quality.
[27]
27. Computer-implemented method according to claim 16, characterized in that evaluating the dataset to determine a degree of compliance with a plurality of actionable conditions comprises: evaluating the dataset against a fuel economy condition to determine whether changing the mobile machine operation will improve fuel economy.
[28]
28. The computer-implemented method of claim 16, characterized in that evaluating the dataset to determine a degree of compliance with a plurality of actionable conditions comprises: evaluating the dataset against a productivity condition to determine whether changing the operation of the mobile machine will improve productivity.
[29]
29. Computer system, characterized in that it comprises: a data evaluation layer (108) that receives (632) a set of data based on sensor signals indicative of parameters sensed in a mobile machine (102), and evaluates (636) the set of data in relation to a set of actionable conditions to identify a set of triggered conditions within the set of actionable conditions; a recommendation tool (184) that identifies a recommendation (192) to change the operation of the mobile machine (102) based on the set of triggered conditions; and an output component that provides (652) an output based on the recommendation (192).
[30]
30. Mobile machine, characterized in that it comprises: a data sensing layer (116, 118, 104, 106 and/or 108) that senses a set of sensor data indicative of parameters sensed in the mobile machine; a data evaluation layer (108) that receives a dataset, based on the sensor dataset, and evaluates the dataset against a set of actionable conditions (185) to identify a set of triggered conditions within the set of actionable conditions; a recommendation tool (184) that identifies a recommendation (192) to change the operation of the mobile machine based on the set of triggered conditions; and an output component (141) that provides an output based on the recommendation.
[31]
31. A method comprising: receiving (744) operator performance information indicative of the performance of an operator of a mobile machine across a plurality of different performance categories; comparing (748) the operator performance information to benchmark performance information across the plurality of different performance categories; quantifying (750) a set of performance improvement opportunities, across the plurality of different categories, based on the comparison; and issuing (730) the quantified set of performance improvement opportunities in each of the different categories.
[32]
32. Method according to claim 31, characterized in that quantifying the set of performance improvement opportunities comprises: identifying the set of performance improvement opportunities, across the plurality of different categories, based on comparing the operator performance information with the benchmark performance information; and determining a performance improvement metric value for each performance improvement opportunity, in each of the different categories.
[33]
33. Method according to claim 32, and characterized in that it further comprises: identifying a set of financial improvement opportunities, across the plurality of different categories, based on the set of performance improvement opportunities; and quantifying the set of financial improvement opportunities to indicate a currency value associated with each financial improvement opportunity in each category, by determining a currency value associated with each performance improvement metric value in each of the different categories.
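Claims 31 to 33 describe quantifying per-category performance gaps and attaching a currency value to each gap. A minimal Python sketch, in which the categories, units, and per-unit monetary values are all invented for illustration:

```python
# Hypothetical sketch of claims 31-33: quantify performance improvement
# opportunities per category, then convert them to currency values.

def improvement_opportunities(operator, benchmark):
    """Per-category gap between benchmark and operator performance."""
    return {cat: max(0.0, benchmark[cat] - operator[cat]) for cat in operator}

def financial_opportunities(gaps, value_per_unit):
    """Currency value associated with each performance improvement metric value."""
    return {cat: gaps[cat] * value_per_unit[cat] for cat in gaps}

# Invented categories and values (tonnes per hour harvested, litres per tonne).
operator = {"productivity_t_per_h": 40.0, "fuel_economy_l_per_t": 2.5}
benchmark = {"productivity_t_per_h": 45.0, "fuel_economy_l_per_t": 2.5}
value_per_unit = {"productivity_t_per_h": 30.0, "fuel_economy_l_per_t": 100.0}

gaps = improvement_opportunities(operator, benchmark)
money = financial_opportunities(gaps, value_per_unit)
```

This assumes higher values are better in every category; a real system would normalize categories where lower is better (such as fuel used per tonne) before differencing.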
[34]
34. The method of claim 33, and characterized in that it further comprises: determining benchmark performance information across several different categories.
[35]
35. Method according to claim 34, characterized in that determining benchmark performance information comprises: identifying leadership performance information as performance information corresponding to a best performing operator in each of several different categories.
[36]
36. Method according to claim 35, characterized in that receiving operator performance information comprises: determining delay performance information as an average performance value for all operators, other than the best performing operator, in each of the different categories.
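Claims 35 and 36 define two benchmarks: leadership information (the best performer per category) and delay information (an average across operators per category). A hypothetical sketch, assuming higher scores are better in every category and with invented operator data:

```python
# Hypothetical sketch of claims 35-36: derive leadership and delay (average)
# benchmark information from per-operator, per-category scores.

def leadership_info(team_scores):
    """Best-performing operator's value in each category (claim 35)."""
    categories = next(iter(team_scores.values()))
    return {cat: max(s[cat] for s in team_scores.values()) for cat in categories}

def delay_info(team_scores):
    """Average performance value across all operators in each category (claim 36)."""
    categories = next(iter(team_scores.values()))
    n = len(team_scores)
    return {cat: sum(s[cat] for s in team_scores.values()) / n for cat in categories}

# Invented team data: two operators, two categories.
team = {
    "op_a": {"productivity": 45.0, "fuel_economy": 80.0},
    "op_b": {"productivity": 35.0, "fuel_economy": 90.0},
}
leaders = leadership_info(team)
lag = delay_info(team)
```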
[37]
37. Method according to claim 34, characterized in that determining the benchmark performance information comprises: identifying a theoretical optimal performance in each of the different categories, based on a current machine configuration of the mobile machine.
[38]
38. Method according to claim 34, characterized in that determining the benchmark performance information comprises: identifying a theoretical optimal performance in each of the different categories, based on an updated machine configuration of the mobile machine.
[39]
39. Method according to claim 33, characterized in that determining a performance improvement metric value comprises: identifying a number of time units that can be saved for each performance improvement opportunity; and identifying a number of fuel units that can be saved for each performance improvement opportunity.
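The metric values of claim 39, time units and fuel units saved per opportunity, can be computed as in this sketch. The season totals and rates are invented for the example:

```python
# Hypothetical sketch of claim 39: express an improvement opportunity as
# hours of time and litres of fuel that could be saved.

def time_saved_hours(total_tonnes, operator_rate, benchmark_rate):
    """Hours saved over the season if the operator reached the benchmark rate."""
    return total_tonnes / operator_rate - total_tonnes / benchmark_rate

def fuel_saved_litres(total_tonnes, operator_l_per_t, benchmark_l_per_t):
    """Litres of fuel saved at the benchmark fuel economy."""
    return total_tonnes * (operator_l_per_t - benchmark_l_per_t)

# Invented season totals: 9000 t harvested, rates in t/h and l/t.
t_saved = time_saved_hours(9000.0, operator_rate=40.0, benchmark_rate=45.0)
f_saved = fuel_saved_litres(9000.0, operator_l_per_t=2.5, benchmark_l_per_t=2.3)
```

These unit-level savings are what claim 33 would then multiply by per-unit currency values to obtain the financial improvement opportunities.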
[40]
40. Computer system, characterized in that it comprises: a performance opportunity tool (676) that receives operator performance information and compares the operator performance information with benchmark performance information to identify performance opportunities indicative of improvements in the operating performance of a mobile machine across a plurality of different performance categories; a performance savings component (696) that generates quantified performance savings values, indicative of performance savings across the plurality of different performance categories, based on the identified performance opportunities; and a user interface component (682) that outputs an indication of the performance opportunities and the quantified performance savings values across the plurality of different categories.
[41]
41. The computer system of claim 40, and characterized in that it further comprises: a financial opportunity tool that receives the quantified performance savings values and generates financial savings values corresponding to the quantified performance savings values.
[42]
42. Computer system according to claim 41, characterized in that the performance opportunity tool comprises: a benchmark component that obtains benchmark performance information that is indicative of at least one of performance information corresponding to a reference operator and performance information indicative of theoretical optimal performance, given a current mobile machine configuration.
[43]
43. Computer system according to claim 41, characterized in that the performance savings component generates the quantified savings values in units of fuel quantity and time.
[44]
44. Computer system according to claim 41, and characterized in that it further comprises: a recommendation tool that generates a set of recommendations indicative of changes in the operation of the mobile machine that can be made to take advantage of the identified performance opportunities.
[45]
45. A computer-readable storage medium that stores computer-executable instructions that, when executed by a computer, cause the computer to perform a method characterized in that it comprises: receiving (744) operator performance information indicative of the performance of the operator of a mobile machine across a plurality of different performance categories; comparing (748) the operator performance information with benchmark performance information across the plurality of different performance categories; identifying (748) a set of performance improvement opportunities, across the plurality of different categories, based on comparing the operator performance information with the benchmark performance information; quantifying (750) the set of performance improvement opportunities across the plurality of different categories; and issuing (730) the quantified set of performance improvement opportunities in each of the different categories.
[46]
46. Computing system, characterized in that it comprises: a variation opportunity space tool (942) configured to identify an agronomic variation opportunity based on agronomic variation data indicative of the variation of sensed agronomic parameters (950-958) that are sensed by sensors in a mobile machine (102) as the mobile machine moves across a piece of land; and a prescriptive component system (944) configured to identify a set of prescriptive components (982) for the mobile machine (102) based on the agronomic variation opportunity, in which the prescriptive components (982), when arranged in the mobile machine (102), address the variation in the agronomic parameters (950-958).
[47]
47. Computer system according to claim 46, and characterized in that it further comprises: a user interface component that issues a component identifier that identifies the set of prescriptive components.
[48]
48. Computing system according to claim 46, characterized in that the variation opportunity space tool comprises: an agronomic variation identifier component configured to receive the agronomic parameters and identify a variation value corresponding to each agronomic parameter, indicative of the variation of each agronomic parameter as the mobile machine moves through a field.
[49]
49. The computing system of claim 48, characterized in that the variation opportunity space tool comprises: a variation opportunity space identifier component configured to identify a variation opportunity space for the field, based on the identified variation values; and an opportunity aggregation component that identifies a combined opportunity space, based on the variation opportunity space for the field and variation opportunity spaces for a plurality of additional fields.
[50]
50. Computer system according to claim 49, characterized in that the prescriptive component system comprises: an opportunity-to-component map that maps agronomic variation opportunities to a set of prescriptive components, the set of prescriptive components on the map including machine automation and control components that automate portions of mobile machine control, machine configuration components that alter a mobile machine configuration, and data collection components that modify a setting with which the agronomic parameters are sensed; and an opportunity-to-component mapping tool configured to access the opportunity-to-component map and identify the set of prescriptive components based on the combined opportunity space.
[51]
51. A computing system according to claim 50, and characterized in that it further comprises: a financial opportunity space tool configured to estimate a financial impact corresponding to the disposition of the identified set of prescriptive components in the mobile machine and to issue a financial impact analysis including the financial impact.
[52]
52. Method, characterized by the fact that it comprises: receiving (1000) agronomic variation information indicative of sensed agronomic information (950-958) in a plurality of different categories, the sensed agronomic information (950-958) being sensed by a set of sensors in a mobile machine (102) as the mobile machine (102) moves across a given portion of land; quantifying (1006) a set of agronomic variation opportunities, across the plurality of different categories, based on the agronomic variation information; identifying (1010) a set of prescriptive components (982) based on the set of agronomic variation opportunities; and outputting (1022) the quantified set of agronomic variation opportunities and the corresponding prescriptive components (982).
[53]
53. A computing system comprising: a metric calculator system (1074) configured to receive (1112) individual operator performance information, the individual operator performance information being received for a plurality of different operators in a predefined team of operators, the individual operator performance information being indicative of a comparison of individual operator performance, sensed when controlling a given mobile machine (102) among a plurality of different mobile machines (102) operated by the plurality of different operators on the team, with benchmark performance data (114), the metric calculator system (1074) being configured to calculate (1116) a performance score for each of the different operators on the team, based on the individual operator performance information; a team score aggregator (1082) configured to combine (1140) the performance scores for each of the different operators on the team to obtain a team performance score; and a user interface component (1094) configured to output (1158) the team performance score with a user input mechanism that is actuated to view (1180) detailed information corresponding to each of the operators.
[54]
54. Computing system according to claim 53, characterized in that the metric calculator system comprises: a calculator component that generates a set of individual performance metrics, comprising a plurality of individual performance metrics, for each operator on the team; and a team member composite score generator configured to combine the set of individual performance metrics for each operator on the team to obtain a composite score for each operator on the team.
[55]
55. Computing system according to claim 54, characterized in that the team score aggregator is configured to calculate a set of team metrics, different from the set of individual performance metrics, for the team and to combine the set of team metrics with the composite scores for the operators on the team to obtain a composite performance score for the team.
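Claims 54 and 55 combine individual metrics into per-operator composite scores, then blend those with team-level metrics into one team score. A sketch under assumed weights; the claims do not specify a weighting scheme, so the weights, metric names, and blend ratio here are invented:

```python
# Hypothetical sketch of claims 54-55: per-operator composite scores,
# then a team composite that also folds in team-level metrics.

def operator_composite(metrics, weights):
    """Weighted combination of an operator's individual performance metrics."""
    return sum(metrics[m] * weights[m] for m in metrics)

def team_composite(operator_scores, team_metrics, team_weight=0.5):
    """Blend the mean operator composite with the mean of team-level metrics."""
    mean_op = sum(operator_scores.values()) / len(operator_scores)
    mean_team = sum(team_metrics.values()) / len(team_metrics)
    return (1 - team_weight) * mean_op + team_weight * mean_team

# Invented weights and scores on a 0-100 scale.
weights = {"productivity": 0.6, "fuel_economy": 0.4}
scores = {
    "op_a": operator_composite({"productivity": 90.0, "fuel_economy": 80.0}, weights),
    "op_b": operator_composite({"productivity": 70.0, "fuel_economy": 60.0}, weights),
}
team_score = team_composite(scores, {"logistics": 75.0, "idle_time": 85.0})
```

Per claim 57, a real calculator component would select a different metric set (and weights) depending on the type of mobile machine each operator is running.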
[56]
56. The computing system of claim 55, and characterized in that it further comprises: a comparison component that generates a comparison display indicative of a comparison of the composite performance score for the team with composite performance scores for other teams.
[57]
57. Computer system according to claim 54, characterized in that the calculator component is configured to identify a type of mobile machine being operated by a given operator and to calculate a set of individual performance metrics that varies based on the identified type of mobile machine.
[58]
58. Computer system according to claim 57, characterized in that the calculator component is configured to identify a first operator on the team operating a combine harvester and a second operator on the team operating a tractor, and to calculate a first set of individual performance metrics for the combine operator and a second set of individual performance metrics for the tractor operator.
[59]
59. The computing system of claim 53, characterized in that the user interface component comprises at least one of a set of user interface devices consisting of: a user interface device configured to output the team score to at least one operator on the team, in near real time; a user interface device configured to output the team score, in near real time, to a team coordinator who coordinates one or more different teams; and a user interface device configured to output the team score to a remote server system.
[60]
60. Method, characterized in that it comprises: sensing (206) individual operator performance parameters for a plurality of different operators, each operating a different mobile machine (102), in a predefined team of operators; comparing (244) the individual operator performance parameters to benchmark performance data (114); generating (116) a performance score for each of the different operators on the team, based on the comparison (244); generating (1148) a team performance score based on a combination of the performance scores for each of the different operators on the team; and issuing (1158) the team performance score.
Similar technologies:
Publication number | Publication date | Patent title
US10311527B2|2019-06-04|Agronomic variation and team performance analysis
US10453018B2|2019-10-22|Agricultural information sensing and retrieval
US20150199630A1|2015-07-16|Operator performance opportunity analysis
US10380704B2|2019-08-13|Operator performance recommendation generation
US10028451B2|2018-07-24|Identifying management zones in agricultural fields and generating planting plans for the zones
EP3418957A1|2018-12-26|Combine harvester control interface for operator and/or remote user
US10437243B2|2019-10-08|Combine harvester control interface for operator and/or remote user
US9892376B2|2018-02-13|Operator performance report generation
EP3570228B1|2021-03-03|Machine control system using performance score based setting adjustment
BR102018012318A2|2019-01-15|Computer implemented method for controlling a mobile harvesting machine, and mobile harvesting machine
CN106105551B|2021-03-23|Sensing and display of crop loss data
US20200326727A1|2020-10-15|Zonal machine control
US20200090094A1|2020-03-19|Harvester control system
US11079725B2|2021-08-03|Machine control using real-time model
AU2017310240A1|2019-03-14|Delineating management zones based on historical yield maps
CN111802060A|2020-10-23|Work machine control using real-time models
EP3598721B1|2021-05-05|Detecting network congestions in a communication network
CN112889063A|2021-06-01|Automatic yield prediction and seed rate recommendation based on weather data
JP2022036523A|2022-03-08|Work management system, work management method and work management program
JP2022036524A|2022-03-08|Work management system, work management method and work management program
JP2021048788A|2021-04-01|Virtual utilization time calculation device
US20210321567A1|2021-10-21|Agricultural harvesting machine control using machine learning for variable delays
US10736266B2|2020-08-11|Control of settings on a combine harvester with bias removal
JP2021051482A|2021-04-01|Relative evaluation device
US20210302969A1|2021-09-30|Mobile work machine control based on control zone map data
Family patents:
Publication number | Publication date
US10311527B2|2019-06-04|
EP3095039A1|2016-11-23|
WO2015108633A1|2015-07-23|
EP3095039A4|2017-06-21|
US20150199775A1|2015-07-16|
CN105814552A|2016-07-27|
Cited documents:
Publication number | Application date | Publication date | Applicant | Patent title

EP0339141B1|1988-04-26|1994-07-06|New Holland Belgium N.V.|Method and apparatus for measuring grain loss in harvesting machines|
JP3684627B2|1994-12-28|2005-08-17|日産自動車株式会社|Variable valve operating device for vehicle internal combustion engine|
KR0179540B1|1995-01-23|1999-04-15|구자홍|Plate fin for fin tube type heat exchanger|
US5585757A|1995-06-06|1996-12-17|Analog Devices, Inc.|Explicit log domain root-mean-square detector|
US5751199A|1996-01-16|1998-05-12|Trw Inc.|Combline multiplexer with planar common junction input|
US5734849A|1996-07-01|1998-03-31|Sun Microsystems, Inc.|Dual bus memory transactions using address bus for data transfer|
US6995675B2|1998-03-09|2006-02-07|Curkendall Leland D|Method and system for agricultural data collection and management|
DK1095262T3|1998-06-29|2003-05-19|Deere & Co|Optoelectric apparatus for detecting damaged grains|
AU6974501A|2000-06-05|2001-12-17|Chem Equipment Company Inc Ag|System and method for creating controller application maps for site-specific farming|
EP1323099A2|2000-08-22|2003-07-02|Gary M. Schneider|System and method for developing a farm management plan for production agriculture|
US20020107624A1|2001-02-07|2002-08-08|Deere & Company, A Delaware Corporation|Monitoring equipment for an agricultural machine|
US6553300B2|2001-07-16|2003-04-22|Deere & Company|Harvester with intelligent hybrid control system|
DE10203653A1|2002-01-30|2003-07-31|Deere & Co|Agricultural or construction machine with a portable hand-held operating device that enables control of the vehicle from outside the cabin so that faulty operation can be easily detected|
US20040021563A1|2002-07-31|2004-02-05|Deere & Company|Method for remote monitoring equipment for an agricultural machine|
US7047133B1|2003-01-31|2006-05-16|Deere & Company|Method and system of evaluating performance of a crop|
US7184892B1|2003-01-31|2007-02-27|Deere & Company|Method and system of evaluating performance of a crop|
US6999877B1|2003-01-31|2006-02-14|Deere & Company|Method and system of evaluating performance of a crop|
US20050150202A1|2004-01-08|2005-07-14|Iowa State University Research Foundation, Inc.|Apparatus and method for monitoring and controlling an agricultural harvesting machine to enhance the economic harvesting performance thereof|
US20050171835A1|2004-01-20|2005-08-04|Mook David A.|System for monitoring economic trends in fleet management network|
US7317975B2|2004-02-03|2008-01-08|Haldex Brake Products Ab|Vehicle telematics system|
US20060030990A1|2004-08-06|2006-02-09|Anderson Noel W|Method and system for estimating an agricultural management parameter|
US7333922B2|2005-03-30|2008-02-19|Caterpillar Inc.|System and method of monitoring machine performance|
GB0507928D0|2005-04-20|2005-05-25|Cnh Belgium Nv|Input device for agricultural vehicle information display|
GB0507927D0|2005-04-20|2005-05-25|Cnh Belgium Nv|Agricultural vehicle and implement information display|
MX2007015627A|2005-06-10|2008-02-21|Pioneer Hi Bred Int|Method for use of environmental classification in product selection.|
US9129233B2|2006-02-15|2015-09-08|Catepillar Inc.|System and method for training a machine operator|
WO2008150948A1|2007-06-01|2008-12-11|Syngenta Participations Ag|Methods for the commercial production of transgenic plants|
GB0714942D0|2007-08-01|2007-09-12|Cnh Belgium Nv|A biomass cleaner improvements in corp harvesting machine and related methods|
US20090259483A1|2008-04-11|2009-10-15|Larry Lee Hendrickson|Method for making a land management decision based on processed elevational data|
US8175775B2|2008-06-11|2012-05-08|Cnh America Llc|System and method employing short range communications for establishing performance parameters of an exemplar agricultural machine among a plurality of like-purpose agricultural machines|
US9152938B2|2008-08-11|2015-10-06|Farmlink Llc|Agricultural machine and operator performance information systems and related methods|
US8280595B2|2008-08-12|2012-10-02|Cnh America Llc|System and method employing short range communications for communicating and exchanging operational and logistical status information among a plurality of agricultural machines|
US8489622B2|2008-12-12|2013-07-16|Sas Institute Inc.|Computer-implemented systems and methods for providing paginated search results from a database|
US9098820B2|2009-02-23|2015-08-04|International Business Machines Corporation|Conservation modeling engine framework|
CN101622928A|2009-08-10|2010-01-13|西北农林科技大学|Combine harvester remote control system|
FI20090447A|2009-11-26|2011-05-27|Ponsse Oyj|Method and device in connection with a forestry machine|
EP2512217A1|2009-12-17|2012-10-24|Nunhems B.V.|Tetraploid corn salad|
US8825281B2|2010-04-09|2014-09-02|Jacques DeLarochelière|Vehicle telemetry system and method for evaluating and training drivers|
US8469784B1|2010-04-16|2013-06-25|U.S. Department Of Energy|Autonomous grain combine control system|
US8463510B2|2010-04-30|2013-06-11|Cnh America Llc|GPS controlled residue spread width|
US20120253709A1|2010-12-30|2012-10-04|Agco Corporation|Automatic Detection of Machine Status for Fleet Management|
EP2676214A1|2011-02-17|2013-12-25|Nike International Ltd.|Tracking of user performance metrics during a workout session|
US8694382B2|2011-02-18|2014-04-08|Cnh America Llc|System and method for automatic guidance control of a vehicle|
CN103380439B|2011-03-10|2017-09-12|富士通株式会社|Agricultural operation householder method and agricultural operation servicing unit|
US9330062B2|2011-03-11|2016-05-03|Intelligent Agricultural Solutions, Llc|Vehicle control and gateway module|
US9607342B2|2011-07-18|2017-03-28|Conservis Corporation|GPS-based ticket generation in harvest life cycle information management system and method|
GB201200460D0|2011-12-21|2012-02-22|Agco Corp|Real-time determination of fleet revenue metric|
GB201200425D0|2011-12-21|2012-02-22|Agco Corp|Closed loop settings optimization using revenue metric|
DE102013106131A1|2012-07-16|2014-06-12|Claas Selbstfahrende Erntemaschinen Gmbh|Driver assistance system for agricultural machine|
US20140025440A1|2012-07-18|2014-01-23|Paresh L. Nagda|Aggregated performance indicator statistics for managing fleet performance|
DE102012021469A1|2012-11-05|2014-05-08|Claas Selbstfahrende Erntemaschinen Gmbh|Assistance system for optimizing vehicle operation|
US8965640B2|2012-11-30|2015-02-24|Caterpillar Inc.|Conditioning a performance metric for an operator display|
US9686902B2|2012-12-18|2017-06-27|Cnh Industrial America Llc|System and method for improving performance of an agricultural vehicle or implement|
KR20150103243A|2013-01-03|2015-09-09|크라운 이큅먼트 코포레이션|Tracking industrial vehicle operator quality|
US20140277905A1|2013-03-15|2014-09-18|Deere & Company|Methods and apparatus to manage a fleet of work machines|
US9403536B2|2013-08-12|2016-08-02|Deere & Company|Driver assistance system|
CN103453933A|2013-08-18|2013-12-18|吉林大学|Agricultural machine working parameter integrated monitoring platform and using method thereof|
WO2015035130A2|2013-09-05|2015-03-12|Crown Equipment Corporation|Dynamic operator behavior analyzer|
US9349228B2|2013-10-23|2016-05-24|Trimble Navigation Limited|Driver scorecard system and method|
CN103604613B|2013-11-25|2016-03-23|山东科大微机应用研究所有限公司|Fixing washboard-type farm machinery brake performance detector|
US9697491B2|2013-12-19|2017-07-04|Trapeze Software Ulc|System and method for analyzing performance data in a transit organization|
US10311527B2|2014-01-14|2019-06-04|Deere & Company|Agronomic variation and team performance analysis|
US20150199630A1|2014-01-14|2015-07-16|Deere & Company|Operator performance opportunity analysis|
US10453018B2|2014-01-14|2019-10-22|Deere & Company|Agricultural information sensing and retrieval|
US10380704B2|2014-01-14|2019-08-13|Deere & Company|Operator performance recommendation generation|
US9892376B2|2014-01-14|2018-02-13|Deere & Company|Operator performance report generation|
US10515425B2|2014-04-01|2019-12-24|The Climate Corporation|Agricultural implement and implement operator monitoring apparatus, systems, and methods|
US20160098637A1|2014-10-03|2016-04-07|Caterpillar Inc.|Automated Data Analytics for Work Machines|
US9792557B2|2015-01-14|2017-10-17|Accenture Global Services Limited|Precision agriculture system|
US20160212969A1|2015-01-26|2016-07-28|Sugar Tree Innovations LLC|Dispensing Apparatus|
US10791666B2|2015-06-08|2020-10-06|The Climate Corporation|Agricultural data analysis|
FR3061031A1|2016-12-22|2018-06-29|Suez Groupe|METHOD AND INSTALLATION FOR DENITRIFICATION OF COMBUSTION FUME|
US20180359917A1|2017-06-19|2018-12-20|Deere & Company|Remote control of settings on a combine harvester|
US20180359919A1|2017-06-19|2018-12-20|Deere & Company|Combine harvester control interface for operator and/or remote user|
US10310455B2|2017-06-19|2019-06-04|Deere & Company|Combine harvester control and communication system|
US10437243B2|2017-06-19|2019-10-08|Deere & Company|Combine harvester control interface for operator and/or remote user|
US10311527B2|2014-01-14|2019-06-04|Deere & Company|Agronomic variation and team performance analysis|
US10453018B2|2014-01-14|2019-10-22|Deere & Company|Agricultural information sensing and retrieval|
US10380704B2|2014-01-14|2019-08-13|Deere & Company|Operator performance recommendation generation|
US10515425B2|2014-04-01|2019-12-24|The Climate Corporation|Agricultural implement and implement operator monitoring apparatus, systems, and methods|
US9728015B2|2014-10-15|2017-08-08|TrueLite Trace, Inc.|Fuel savings scoring system with remote real-time vehicle OBD monitoring|
US10491590B2|2015-10-12|2019-11-26|AssetWorks LLC|System and method for verifying and redirecting mobile applications|
US20170115833A1|2015-10-27|2017-04-27|Cnh Industrial America Llc|Top bar display for an agricultural system|
EP3282333B1|2016-08-12|2021-05-19|Siemens Aktiengesellschaft|A technique for monitoring technical equipment|
US10437240B2|2016-09-13|2019-10-08|Toyota Motor Engineering & Manufacturing North America, Inc.|Manufacturing evaluation system|
US10365640B2|2017-04-11|2019-07-30|International Business Machines Corporation|Controlling multi-stage manufacturing process based on internet of thingssensors and cognitive rule induction|
US10437243B2|2017-06-19|2019-10-08|Deere & Company|Combine harvester control interface for operator and/or remote user|
DE102019206817A1|2018-05-18|2019-12-12|Deere & Company|SELF-LEARNING CONTROL SYSTEM FOR A MOBILE MACHINE|
US10310455B2|2017-06-19|2019-06-04|Deere & Company|Combine harvester control and communication system|
BR102018008598A2|2017-06-19|2019-03-19|Deere & Company|HARVEST CONTROL SYSTEM, HARVEST MACHINE CONTROL METHOD, AND HARVEST|
US10694668B2|2017-06-19|2020-06-30|Deere & Company|Locally controlling settings on a combine harvester based on a remote settings adjustment|
US10782672B2|2018-05-15|2020-09-22|Deere & Company|Machine control system using performance score based setting adjustment|
US10736266B2|2018-05-31|2020-08-11|Deere & Company|Control of settings on a combine harvester with bias removal|
US20200090094A1|2018-09-19|2020-03-19|Deere & Company|Harvester control system|
US20200245557A1|2019-01-31|2020-08-06|Cnh Industrial America Llc|Combine loss monitor mapping|
JP2020170474A|2019-04-05|2020-10-15|コベルコ建機株式会社|Skill information presentation system and skill information presentation method|
Legal status:
2020-03-24| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
Priority:
Application number | Filing date | Patent title
US14/155023|2014-01-14|
US14/155,023|US9892376B2|2014-01-14|2014-01-14|Operator performance report generation|
US14/271,077|US10380704B2|2014-01-14|2014-05-06|Operator performance recommendation generation|
US14/271077|2014-05-06|
US14/445,699|US20150199630A1|2014-01-14|2014-07-29|Operator performance opportunity analysis|
US14/445699|2014-07-29|
US14/546725|2014-11-18|
US14/546,725|US10311527B2|2014-01-14|2014-11-18|Agronomic variation and team performance analysis|
PCT/US2014/069541|WO2015108633A1|2014-01-14|2014-12-10|Agronomic variation and team performance analysis|